Test Report: KVM_Linux_crio 18453

9277aac12dad2c88a60ac507f67489f1590ebf0d:2024-03-19:33652

Tests failed (31/316)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 153.34
53 TestAddons/StoppedEnableDisable 154.37
105 TestFunctional/parallel/PersistentVolumeClaim 220.11
172 TestMultiControlPlane/serial/StopSecondaryNode 142.1
174 TestMultiControlPlane/serial/RestartSecondaryNode 53.84
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 427.17
179 TestMultiControlPlane/serial/StopCluster 142.19
180 TestMultiControlPlane/serial/RestartCluster 719.42
236 TestMultiNode/serial/RestartKeepsNodes 310.3
238 TestMultiNode/serial/StopMultiNode 141.43
245 TestPreload 278.25
253 TestKubernetesUpgrade 451.13
288 TestPause/serial/SecondStartNoReconfiguration 59.74
290 TestStartStop/group/old-k8s-version/serial/FirstStart 311.55
297 TestStartStop/group/no-preload/serial/Stop 139
300 TestStartStop/group/embed-certs/serial/Stop 139.11
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.57
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 81.06
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.17
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
314 TestStartStop/group/old-k8s-version/serial/SecondStart 748.46
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.29
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.44
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.23
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.4
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 462.89
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 536.57
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 238.19
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 116.67
TestAddons/parallel/Ingress (153.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-630101 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-630101 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-630101 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [64517821-4670-447e-8ddc-b3df143a2aae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [64517821-4670-447e-8ddc-b3df143a2aae] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008940357s
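
The readiness phase above (kubectl waiting on the ingress controller, then the helpers polling the run=nginx pod) succeeded in about 11 s. A minimal client-go sketch of that polling step is shown below; the kubeconfig path, namespace, and 8-minute deadline mirror the test settings, but the code is illustrative and not the helpers' actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; the test selects its cluster via the
	// "addons-630101" context instead.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(8 * time.Minute) // same budget as the test's 8m0s wait
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=nginx"})
		if err == nil {
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					fmt.Printf("pod %s is Ready\n", pods.Items[i].Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for a Ready pod matching run=nginx")
}
```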
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-630101 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.931487505s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
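
For context: ssh reports the remote command's exit status, and 28 is curl's exit code for an operation that timed out, so the request through the ingress never received a response within curl's time limit. Below is a rough Go equivalent of that probe, run from the host against the node IP reported later in this log (192.168.39.203); the retry loop, the timeouts, and the assumption that the ingress answers on port 80 of the node IP are illustrative and not part of the test.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Node IP taken from this run's logs; the test itself curls 127.0.0.1
	// from inside the VM, so reachability on the node IP is an assumption.
	const nodeIP = "192.168.39.203"

	client := &http.Client{Timeout: 10 * time.Second} // per-request timeout, like curl's
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, "http://"+nodeIP+"/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// The Ingress rule routes on this virtual host, so the Host header
		// must be set explicitly, just as curl's -H 'Host: ...' does.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("HTTP %d, %d bytes\n", resp.StatusCode, len(body))
			return
		}
		log.Printf("request failed (%v), retrying...", err)
		time.Sleep(5 * time.Second)
	}
	log.Fatal("no response from the ingress before the deadline")
}
```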
addons_test.go:286: (dbg) Run:  kubectl --context addons-630101 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.203
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-630101 addons disable ingress-dns --alsologtostderr -v=1: (1.301789405s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-630101 addons disable ingress --alsologtostderr -v=1: (7.80378589s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-630101 -n addons-630101
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-630101 logs -n 25: (1.401387705s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-516738                                                                     | download-only-516738 | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC | 19 Mar 24 19:06 UTC |
	| delete  | -p download-only-454018                                                                     | download-only-454018 | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC | 19 Mar 24 19:06 UTC |
	| delete  | -p download-only-031263                                                                     | download-only-031263 | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC | 19 Mar 24 19:06 UTC |
	| delete  | -p download-only-516738                                                                     | download-only-516738 | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC | 19 Mar 24 19:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-144883 | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC |                     |
	|         | binary-mirror-144883                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44349                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-144883                                                                     | binary-mirror-144883 | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC | 19 Mar 24 19:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC |                     |
	|         | addons-630101                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC |                     |
	|         | addons-630101                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-630101 --wait=true                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:06 UTC | 19 Mar 24 19:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-630101 addons                                                                        | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | addons-630101                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | -p addons-630101                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-630101 ssh cat                                                                       | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | /opt/local-path-provisioner/pvc-6a4de478-6c61-4a98-899b-4b8888bdf238_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-630101 addons disable                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-630101 ip                                                                            | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	| addons  | addons-630101 addons disable                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | -p addons-630101                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | addons-630101                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-630101 ssh curl -s                                                                   | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-630101 addons                                                                        | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-630101 addons disable                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-630101 addons                                                                        | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:10 UTC | 19 Mar 24 19:10 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-630101 ip                                                                            | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:12 UTC | 19 Mar 24 19:12 UTC |
	| addons  | addons-630101 addons disable                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:12 UTC | 19 Mar 24 19:13 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-630101 addons disable                                                                | addons-630101        | jenkins | v1.32.0 | 19 Mar 24 19:13 UTC | 19 Mar 24 19:13 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:06:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:06:32.542952   18263 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:06:32.543042   18263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:06:32.543050   18263 out.go:304] Setting ErrFile to fd 2...
	I0319 19:06:32.543055   18263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:06:32.543238   18263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:06:32.543793   18263 out.go:298] Setting JSON to false
	I0319 19:06:32.544596   18263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2891,"bootTime":1710872302,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:06:32.544649   18263 start.go:139] virtualization: kvm guest
	I0319 19:06:32.546674   18263 out.go:177] * [addons-630101] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:06:32.548083   18263 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:06:32.548097   18263 notify.go:220] Checking for updates...
	I0319 19:06:32.549427   18263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:06:32.550645   18263 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:06:32.551809   18263 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:06:32.553177   18263 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:06:32.554295   18263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:06:32.555491   18263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:06:32.585431   18263 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 19:06:32.586849   18263 start.go:297] selected driver: kvm2
	I0319 19:06:32.586859   18263 start.go:901] validating driver "kvm2" against <nil>
	I0319 19:06:32.586879   18263 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:06:32.587518   18263 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:06:32.587572   18263 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:06:32.601234   18263 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:06:32.601269   18263 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 19:06:32.601483   18263 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:06:32.601541   18263 cni.go:84] Creating CNI manager for ""
	I0319 19:06:32.601553   18263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:06:32.601559   18263 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 19:06:32.601597   18263 start.go:340] cluster config:
	{Name:addons-630101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-630101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:06:32.601679   18263 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:06:32.603393   18263 out.go:177] * Starting "addons-630101" primary control-plane node in "addons-630101" cluster
	I0319 19:06:32.604705   18263 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:06:32.604728   18263 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:06:32.604734   18263 cache.go:56] Caching tarball of preloaded images
	I0319 19:06:32.604801   18263 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:06:32.604812   18263 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:06:32.605102   18263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/config.json ...
	I0319 19:06:32.605122   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/config.json: {Name:mk51b070d37d70ca48d8ae70cfca53c0a3ef61aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:06:32.605232   18263 start.go:360] acquireMachinesLock for addons-630101: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:06:32.605275   18263 start.go:364] duration metric: took 31.21µs to acquireMachinesLock for "addons-630101"
	I0319 19:06:32.605293   18263 start.go:93] Provisioning new machine with config: &{Name:addons-630101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-630101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:06:32.605342   18263 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 19:06:32.606921   18263 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0319 19:06:32.607032   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:06:32.607067   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:06:32.620198   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0319 19:06:32.620585   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:06:32.621064   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:06:32.621083   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:06:32.621393   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:06:32.621541   18263 main.go:141] libmachine: (addons-630101) Calling .GetMachineName
	I0319 19:06:32.621713   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:32.621845   18263 start.go:159] libmachine.API.Create for "addons-630101" (driver="kvm2")
	I0319 19:06:32.621888   18263 client.go:168] LocalClient.Create starting
	I0319 19:06:32.621927   18263 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:06:32.758340   18263 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:06:32.870821   18263 main.go:141] libmachine: Running pre-create checks...
	I0319 19:06:32.870844   18263 main.go:141] libmachine: (addons-630101) Calling .PreCreateCheck
	I0319 19:06:32.871317   18263 main.go:141] libmachine: (addons-630101) Calling .GetConfigRaw
	I0319 19:06:32.871711   18263 main.go:141] libmachine: Creating machine...
	I0319 19:06:32.871725   18263 main.go:141] libmachine: (addons-630101) Calling .Create
	I0319 19:06:32.871857   18263 main.go:141] libmachine: (addons-630101) Creating KVM machine...
	I0319 19:06:32.873017   18263 main.go:141] libmachine: (addons-630101) DBG | found existing default KVM network
	I0319 19:06:32.873723   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:32.873583   18285 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0319 19:06:32.873741   18263 main.go:141] libmachine: (addons-630101) DBG | created network xml: 
	I0319 19:06:32.873750   18263 main.go:141] libmachine: (addons-630101) DBG | <network>
	I0319 19:06:32.873756   18263 main.go:141] libmachine: (addons-630101) DBG |   <name>mk-addons-630101</name>
	I0319 19:06:32.873778   18263 main.go:141] libmachine: (addons-630101) DBG |   <dns enable='no'/>
	I0319 19:06:32.873798   18263 main.go:141] libmachine: (addons-630101) DBG |   
	I0319 19:06:32.873811   18263 main.go:141] libmachine: (addons-630101) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0319 19:06:32.873820   18263 main.go:141] libmachine: (addons-630101) DBG |     <dhcp>
	I0319 19:06:32.873830   18263 main.go:141] libmachine: (addons-630101) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0319 19:06:32.873839   18263 main.go:141] libmachine: (addons-630101) DBG |     </dhcp>
	I0319 19:06:32.873854   18263 main.go:141] libmachine: (addons-630101) DBG |   </ip>
	I0319 19:06:32.873864   18263 main.go:141] libmachine: (addons-630101) DBG |   
	I0319 19:06:32.873874   18263 main.go:141] libmachine: (addons-630101) DBG | </network>
	I0319 19:06:32.873888   18263 main.go:141] libmachine: (addons-630101) DBG | 
	I0319 19:06:32.878868   18263 main.go:141] libmachine: (addons-630101) DBG | trying to create private KVM network mk-addons-630101 192.168.39.0/24...
	I0319 19:06:32.938428   18263 main.go:141] libmachine: (addons-630101) DBG | private KVM network mk-addons-630101 192.168.39.0/24 created
	I0319 19:06:32.938453   18263 main.go:141] libmachine: (addons-630101) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101 ...
	I0319 19:06:32.938483   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:32.938386   18285 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:06:32.938501   18263 main.go:141] libmachine: (addons-630101) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:06:32.938591   18263 main.go:141] libmachine: (addons-630101) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:06:33.175059   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:33.174941   18285 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa...
	I0319 19:06:33.279410   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:33.279262   18285 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/addons-630101.rawdisk...
	I0319 19:06:33.279446   18263 main.go:141] libmachine: (addons-630101) DBG | Writing magic tar header
	I0319 19:06:33.279459   18263 main.go:141] libmachine: (addons-630101) DBG | Writing SSH key tar header
	I0319 19:06:33.279473   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:33.279371   18285 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101 ...
	I0319 19:06:33.279486   18263 main.go:141] libmachine: (addons-630101) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101 (perms=drwx------)
	I0319 19:06:33.279500   18263 main.go:141] libmachine: (addons-630101) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:06:33.279507   18263 main.go:141] libmachine: (addons-630101) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:06:33.279521   18263 main.go:141] libmachine: (addons-630101) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:06:33.279530   18263 main.go:141] libmachine: (addons-630101) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:06:33.279541   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101
	I0319 19:06:33.279556   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:06:33.279567   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:06:33.279576   18263 main.go:141] libmachine: (addons-630101) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:06:33.279583   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:06:33.279589   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:06:33.279595   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:06:33.279601   18263 main.go:141] libmachine: (addons-630101) DBG | Checking permissions on dir: /home
	I0319 19:06:33.279606   18263 main.go:141] libmachine: (addons-630101) DBG | Skipping /home - not owner
	I0319 19:06:33.279635   18263 main.go:141] libmachine: (addons-630101) Creating domain...
	I0319 19:06:33.280620   18263 main.go:141] libmachine: (addons-630101) define libvirt domain using xml: 
	I0319 19:06:33.280641   18263 main.go:141] libmachine: (addons-630101) <domain type='kvm'>
	I0319 19:06:33.280648   18263 main.go:141] libmachine: (addons-630101)   <name>addons-630101</name>
	I0319 19:06:33.280653   18263 main.go:141] libmachine: (addons-630101)   <memory unit='MiB'>4000</memory>
	I0319 19:06:33.280659   18263 main.go:141] libmachine: (addons-630101)   <vcpu>2</vcpu>
	I0319 19:06:33.280663   18263 main.go:141] libmachine: (addons-630101)   <features>
	I0319 19:06:33.280667   18263 main.go:141] libmachine: (addons-630101)     <acpi/>
	I0319 19:06:33.280673   18263 main.go:141] libmachine: (addons-630101)     <apic/>
	I0319 19:06:33.280678   18263 main.go:141] libmachine: (addons-630101)     <pae/>
	I0319 19:06:33.280686   18263 main.go:141] libmachine: (addons-630101)     
	I0319 19:06:33.280699   18263 main.go:141] libmachine: (addons-630101)   </features>
	I0319 19:06:33.280717   18263 main.go:141] libmachine: (addons-630101)   <cpu mode='host-passthrough'>
	I0319 19:06:33.280723   18263 main.go:141] libmachine: (addons-630101)   
	I0319 19:06:33.280734   18263 main.go:141] libmachine: (addons-630101)   </cpu>
	I0319 19:06:33.280742   18263 main.go:141] libmachine: (addons-630101)   <os>
	I0319 19:06:33.280746   18263 main.go:141] libmachine: (addons-630101)     <type>hvm</type>
	I0319 19:06:33.280752   18263 main.go:141] libmachine: (addons-630101)     <boot dev='cdrom'/>
	I0319 19:06:33.280756   18263 main.go:141] libmachine: (addons-630101)     <boot dev='hd'/>
	I0319 19:06:33.280765   18263 main.go:141] libmachine: (addons-630101)     <bootmenu enable='no'/>
	I0319 19:06:33.280769   18263 main.go:141] libmachine: (addons-630101)   </os>
	I0319 19:06:33.280781   18263 main.go:141] libmachine: (addons-630101)   <devices>
	I0319 19:06:33.280793   18263 main.go:141] libmachine: (addons-630101)     <disk type='file' device='cdrom'>
	I0319 19:06:33.280829   18263 main.go:141] libmachine: (addons-630101)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/boot2docker.iso'/>
	I0319 19:06:33.280852   18263 main.go:141] libmachine: (addons-630101)       <target dev='hdc' bus='scsi'/>
	I0319 19:06:33.280864   18263 main.go:141] libmachine: (addons-630101)       <readonly/>
	I0319 19:06:33.280875   18263 main.go:141] libmachine: (addons-630101)     </disk>
	I0319 19:06:33.280893   18263 main.go:141] libmachine: (addons-630101)     <disk type='file' device='disk'>
	I0319 19:06:33.280916   18263 main.go:141] libmachine: (addons-630101)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:06:33.280943   18263 main.go:141] libmachine: (addons-630101)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/addons-630101.rawdisk'/>
	I0319 19:06:33.280961   18263 main.go:141] libmachine: (addons-630101)       <target dev='hda' bus='virtio'/>
	I0319 19:06:33.280979   18263 main.go:141] libmachine: (addons-630101)     </disk>
	I0319 19:06:33.280998   18263 main.go:141] libmachine: (addons-630101)     <interface type='network'>
	I0319 19:06:33.281012   18263 main.go:141] libmachine: (addons-630101)       <source network='mk-addons-630101'/>
	I0319 19:06:33.281024   18263 main.go:141] libmachine: (addons-630101)       <model type='virtio'/>
	I0319 19:06:33.281034   18263 main.go:141] libmachine: (addons-630101)     </interface>
	I0319 19:06:33.281045   18263 main.go:141] libmachine: (addons-630101)     <interface type='network'>
	I0319 19:06:33.281056   18263 main.go:141] libmachine: (addons-630101)       <source network='default'/>
	I0319 19:06:33.281066   18263 main.go:141] libmachine: (addons-630101)       <model type='virtio'/>
	I0319 19:06:33.281080   18263 main.go:141] libmachine: (addons-630101)     </interface>
	I0319 19:06:33.281096   18263 main.go:141] libmachine: (addons-630101)     <serial type='pty'>
	I0319 19:06:33.281106   18263 main.go:141] libmachine: (addons-630101)       <target port='0'/>
	I0319 19:06:33.281111   18263 main.go:141] libmachine: (addons-630101)     </serial>
	I0319 19:06:33.281123   18263 main.go:141] libmachine: (addons-630101)     <console type='pty'>
	I0319 19:06:33.281138   18263 main.go:141] libmachine: (addons-630101)       <target type='serial' port='0'/>
	I0319 19:06:33.281151   18263 main.go:141] libmachine: (addons-630101)     </console>
	I0319 19:06:33.281161   18263 main.go:141] libmachine: (addons-630101)     <rng model='virtio'>
	I0319 19:06:33.281173   18263 main.go:141] libmachine: (addons-630101)       <backend model='random'>/dev/random</backend>
	I0319 19:06:33.281187   18263 main.go:141] libmachine: (addons-630101)     </rng>
	I0319 19:06:33.281202   18263 main.go:141] libmachine: (addons-630101)     
	I0319 19:06:33.281212   18263 main.go:141] libmachine: (addons-630101)     
	I0319 19:06:33.281227   18263 main.go:141] libmachine: (addons-630101)   </devices>
	I0319 19:06:33.281234   18263 main.go:141] libmachine: (addons-630101) </domain>
	I0319 19:06:33.281241   18263 main.go:141] libmachine: (addons-630101) 
	I0319 19:06:33.286853   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:3c:01:36 in network default
	I0319 19:06:33.287397   18263 main.go:141] libmachine: (addons-630101) Ensuring networks are active...
	I0319 19:06:33.287418   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:33.287936   18263 main.go:141] libmachine: (addons-630101) Ensuring network default is active
	I0319 19:06:33.288292   18263 main.go:141] libmachine: (addons-630101) Ensuring network mk-addons-630101 is active
	I0319 19:06:33.288766   18263 main.go:141] libmachine: (addons-630101) Getting domain xml...
	I0319 19:06:33.289341   18263 main.go:141] libmachine: (addons-630101) Creating domain...
	I0319 19:06:34.623905   18263 main.go:141] libmachine: (addons-630101) Waiting to get IP...
	I0319 19:06:34.624661   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:34.625052   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:34.625082   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:34.625026   18285 retry.go:31] will retry after 200.110231ms: waiting for machine to come up
	I0319 19:06:34.826242   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:34.826742   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:34.826762   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:34.826690   18285 retry.go:31] will retry after 241.067761ms: waiting for machine to come up
	I0319 19:06:35.069110   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:35.069524   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:35.069552   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:35.069484   18285 retry.go:31] will retry after 401.3605ms: waiting for machine to come up
	I0319 19:06:35.471883   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:35.472436   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:35.472466   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:35.472391   18285 retry.go:31] will retry after 539.272121ms: waiting for machine to come up
	I0319 19:06:36.013019   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:36.013413   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:36.013436   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:36.013368   18285 retry.go:31] will retry after 627.605821ms: waiting for machine to come up
	I0319 19:06:36.642043   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:36.642350   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:36.642377   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:36.642312   18285 retry.go:31] will retry after 852.905298ms: waiting for machine to come up
	I0319 19:06:37.496236   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:37.496587   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:37.496613   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:37.496548   18285 retry.go:31] will retry after 808.096629ms: waiting for machine to come up
	I0319 19:06:38.306011   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:38.306403   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:38.306432   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:38.306370   18285 retry.go:31] will retry after 1.066666425s: waiting for machine to come up
	I0319 19:06:39.374456   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:39.374775   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:39.374796   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:39.374741   18285 retry.go:31] will retry after 1.438177566s: waiting for machine to come up
	I0319 19:06:40.815161   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:40.815538   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:40.815563   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:40.815493   18285 retry.go:31] will retry after 1.425062928s: waiting for machine to come up
	I0319 19:06:42.243100   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:42.243536   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:42.243563   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:42.243486   18285 retry.go:31] will retry after 2.109751114s: waiting for machine to come up
	I0319 19:06:44.355979   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:44.356524   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:44.356551   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:44.356486   18285 retry.go:31] will retry after 2.255449259s: waiting for machine to come up
	I0319 19:06:46.613222   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:46.613637   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:46.613664   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:46.613592   18285 retry.go:31] will retry after 4.159926111s: waiting for machine to come up
	I0319 19:06:50.777174   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:50.777615   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find current IP address of domain addons-630101 in network mk-addons-630101
	I0319 19:06:50.777641   18263 main.go:141] libmachine: (addons-630101) DBG | I0319 19:06:50.777561   18285 retry.go:31] will retry after 4.073073034s: waiting for machine to come up
	I0319 19:06:54.853318   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:54.853789   18263 main.go:141] libmachine: (addons-630101) Found IP for machine: 192.168.39.203
	I0319 19:06:54.853810   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has current primary IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:54.853815   18263 main.go:141] libmachine: (addons-630101) Reserving static IP address...
	I0319 19:06:54.854187   18263 main.go:141] libmachine: (addons-630101) DBG | unable to find host DHCP lease matching {name: "addons-630101", mac: "52:54:00:8b:1a:da", ip: "192.168.39.203"} in network mk-addons-630101
	I0319 19:06:54.921841   18263 main.go:141] libmachine: (addons-630101) DBG | Getting to WaitForSSH function...
	I0319 19:06:54.921873   18263 main.go:141] libmachine: (addons-630101) Reserved static IP address: 192.168.39.203
	I0319 19:06:54.921885   18263 main.go:141] libmachine: (addons-630101) Waiting for SSH to be available...
	I0319 19:06:54.924315   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:54.924737   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:54.924773   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:54.924896   18263 main.go:141] libmachine: (addons-630101) DBG | Using SSH client type: external
	I0319 19:06:54.924921   18263 main.go:141] libmachine: (addons-630101) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa (-rw-------)
	I0319 19:06:54.924956   18263 main.go:141] libmachine: (addons-630101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:06:54.924969   18263 main.go:141] libmachine: (addons-630101) DBG | About to run SSH command:
	I0319 19:06:54.924979   18263 main.go:141] libmachine: (addons-630101) DBG | exit 0
	I0319 19:06:55.060164   18263 main.go:141] libmachine: (addons-630101) DBG | SSH cmd err, output: <nil>: 
	I0319 19:06:55.060468   18263 main.go:141] libmachine: (addons-630101) KVM machine creation complete!
	I0319 19:06:55.060763   18263 main.go:141] libmachine: (addons-630101) Calling .GetConfigRaw
	I0319 19:06:55.061263   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:55.061468   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:55.061640   18263 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:06:55.061659   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:06:55.062922   18263 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:06:55.062935   18263 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:06:55.062940   18263 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:06:55.062946   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.065020   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.065389   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.065416   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.065553   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:55.065727   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.065877   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.066002   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:55.066168   18263 main.go:141] libmachine: Using SSH client type: native
	I0319 19:06:55.066335   18263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0319 19:06:55.066345   18263 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:06:55.175561   18263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:06:55.175586   18263 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:06:55.175594   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.178293   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.178628   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.178651   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.178819   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:55.179009   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.179183   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.179321   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:55.179545   18263 main.go:141] libmachine: Using SSH client type: native
	I0319 19:06:55.179720   18263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0319 19:06:55.179733   18263 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:06:55.289387   18263 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:06:55.289513   18263 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:06:55.289528   18263 main.go:141] libmachine: Provisioning with buildroot...
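
The provisioner detection above boils down to running "cat /etc/os-release" over SSH and matching the ID/NAME fields against known provisioners (Buildroot here). A minimal, hypothetical Go sketch of that parsing step, not minikube's actual code:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release content into a key/value map,
// stripping optional quotes around values.
func parseOSRelease(content string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	// Output captured from the guest, as in the log above.
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	f := parseOSRelease(out)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["ID"])
	}
}
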
	I0319 19:06:55.289538   18263 main.go:141] libmachine: (addons-630101) Calling .GetMachineName
	I0319 19:06:55.289780   18263 buildroot.go:166] provisioning hostname "addons-630101"
	I0319 19:06:55.289812   18263 main.go:141] libmachine: (addons-630101) Calling .GetMachineName
	I0319 19:06:55.289999   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.292404   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.292789   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.292819   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.292975   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:55.293162   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.293331   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.293503   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:55.293678   18263 main.go:141] libmachine: Using SSH client type: native
	I0319 19:06:55.293840   18263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0319 19:06:55.293853   18263 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-630101 && echo "addons-630101" | sudo tee /etc/hostname
	I0319 19:06:55.420393   18263 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630101
	
	I0319 19:06:55.420424   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.423259   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.423687   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.423723   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.423897   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:55.424120   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.424295   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.424439   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:55.424609   18263 main.go:141] libmachine: Using SSH client type: native
	I0319 19:06:55.424777   18263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0319 19:06:55.424795   18263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-630101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-630101/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-630101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:06:55.543335   18263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:06:55.543370   18263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:06:55.543391   18263 buildroot.go:174] setting up certificates
	I0319 19:06:55.543402   18263 provision.go:84] configureAuth start
	I0319 19:06:55.543411   18263 main.go:141] libmachine: (addons-630101) Calling .GetMachineName
	I0319 19:06:55.543655   18263 main.go:141] libmachine: (addons-630101) Calling .GetIP
	I0319 19:06:55.546118   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.546500   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.546534   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.546689   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.548683   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.549002   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.549024   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.549120   18263 provision.go:143] copyHostCerts
	I0319 19:06:55.549192   18263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:06:55.549326   18263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:06:55.549385   18263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:06:55.549429   18263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.addons-630101 san=[127.0.0.1 192.168.39.203 addons-630101 localhost minikube]
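
The server certificate above is signed by the local minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.203, addons-630101, localhost, minikube). A self-contained, hypothetical Go sketch of issuing a server certificate with such SANs; the in-memory CA and the validity periods are assumptions, not minikube's values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA generated in memory; the real flow loads ca.pem/ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-630101"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-630101", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.203")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
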
	I0319 19:06:55.714859   18263 provision.go:177] copyRemoteCerts
	I0319 19:06:55.714924   18263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:06:55.714950   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.717643   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.717998   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.718021   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.718246   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:55.718477   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.718639   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:55.718774   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:06:55.802978   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:06:55.830020   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0319 19:06:55.856949   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 19:06:55.883339   18263 provision.go:87] duration metric: took 339.925326ms to configureAuth
	I0319 19:06:55.883367   18263 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:06:55.883563   18263 config.go:182] Loaded profile config "addons-630101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:06:55.883680   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:55.886261   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.886574   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:55.886602   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:55.886775   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:55.886985   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.887127   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:55.887270   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:55.887434   18263 main.go:141] libmachine: Using SSH client type: native
	I0319 19:06:55.887646   18263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0319 19:06:55.887669   18263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:06:56.164026   18263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:06:56.164063   18263 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:06:56.164073   18263 main.go:141] libmachine: (addons-630101) Calling .GetURL
	I0319 19:06:56.165276   18263 main.go:141] libmachine: (addons-630101) DBG | Using libvirt version 6000000
	I0319 19:06:56.167599   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.167918   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.167950   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.168070   18263 main.go:141] libmachine: Docker is up and running!
	I0319 19:06:56.168084   18263 main.go:141] libmachine: Reticulating splines...
	I0319 19:06:56.168092   18263 client.go:171] duration metric: took 23.546191841s to LocalClient.Create
	I0319 19:06:56.168116   18263 start.go:167] duration metric: took 23.546270566s to libmachine.API.Create "addons-630101"
	I0319 19:06:56.168132   18263 start.go:293] postStartSetup for "addons-630101" (driver="kvm2")
	I0319 19:06:56.168144   18263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:06:56.168156   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:56.168360   18263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:06:56.168385   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:56.170486   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.170799   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.170825   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.170976   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:56.171218   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:56.171370   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:56.171523   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:06:56.256467   18263 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:06:56.261472   18263 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:06:56.261503   18263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:06:56.261587   18263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:06:56.261621   18263 start.go:296] duration metric: took 93.481035ms for postStartSetup
	I0319 19:06:56.261659   18263 main.go:141] libmachine: (addons-630101) Calling .GetConfigRaw
	I0319 19:06:56.262239   18263 main.go:141] libmachine: (addons-630101) Calling .GetIP
	I0319 19:06:56.264708   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.265061   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.265084   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.265233   18263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/config.json ...
	I0319 19:06:56.265398   18263 start.go:128] duration metric: took 23.660047249s to createHost
	I0319 19:06:56.265417   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:56.267471   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.267776   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.267806   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.267964   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:56.268136   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:56.268298   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:56.268464   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:56.268591   18263 main.go:141] libmachine: Using SSH client type: native
	I0319 19:06:56.268758   18263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0319 19:06:56.268768   18263 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 19:06:56.377602   18263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710875216.346303960
	
	I0319 19:06:56.377631   18263 fix.go:216] guest clock: 1710875216.346303960
	I0319 19:06:56.377641   18263 fix.go:229] Guest: 2024-03-19 19:06:56.34630396 +0000 UTC Remote: 2024-03-19 19:06:56.265408641 +0000 UTC m=+23.765694921 (delta=80.895319ms)
	I0319 19:06:56.377690   18263 fix.go:200] guest clock delta is within tolerance: 80.895319ms
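
The guest-clock check above parses the output of date +%s.%N on the VM and compares it against the host clock, resyncing only if the delta exceeds a tolerance. A small, hypothetical Go sketch of that comparison; the one-second tolerance is an assumed value for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" (output of `date +%s.%N`)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed value, for illustration only
	guest, err := parseGuestClock("1710875216.346303960")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
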
	I0319 19:06:56.377698   18263 start.go:83] releasing machines lock for "addons-630101", held for 23.772412495s
	I0319 19:06:56.377743   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:56.378009   18263 main.go:141] libmachine: (addons-630101) Calling .GetIP
	I0319 19:06:56.380696   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.381120   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.381150   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.381303   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:56.381840   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:56.382012   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:06:56.382107   18263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:06:56.382157   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:56.382208   18263 ssh_runner.go:195] Run: cat /version.json
	I0319 19:06:56.382231   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:06:56.384799   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.384983   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.385151   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.385180   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.385279   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:56.385300   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:56.385315   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:56.385503   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:06:56.385546   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:56.385645   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:06:56.385647   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:56.385781   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:06:56.385850   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:06:56.385895   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:06:56.466089   18263 ssh_runner.go:195] Run: systemctl --version
	I0319 19:06:56.494823   18263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:06:56.660374   18263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:06:56.666981   18263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:06:56.667040   18263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:06:56.686739   18263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 19:06:56.686769   18263 start.go:494] detecting cgroup driver to use...
	I0319 19:06:56.686841   18263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:06:56.705555   18263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:06:56.721175   18263 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:06:56.721223   18263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:06:56.736339   18263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:06:56.751153   18263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:06:56.871744   18263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:06:57.038468   18263 docker.go:233] disabling docker service ...
	I0319 19:06:57.038564   18263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:06:57.053584   18263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:06:57.067491   18263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:06:57.188380   18263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:06:57.308771   18263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:06:57.324852   18263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:06:57.345586   18263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:06:57.345652   18263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.356483   18263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:06:57.356529   18263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.367142   18263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.378672   18263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.390136   18263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:06:57.402676   18263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.415775   18263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.436613   18263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:06:57.448309   18263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:06:57.458601   18263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:06:57.458643   18263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:06:57.473602   18263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
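
When the sysctl probe for net.bridge.bridge-nf-call-iptables fails because the module is not loaded yet, the fallback seen above is to modprobe br_netfilter and enable IPv4 forwarding. A hypothetical Go sketch of that fallback sequence (shelling out, as the log does):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps any failure with its combined output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
	}
	return nil
}

func main() {
	// Verify that bridged traffic is visible to iptables.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Not fatal: load the bridge netfilter module instead.
		fmt.Println("netfilter check failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	// Ensure IPv4 forwarding is on for pod networking.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}
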
	I0319 19:06:57.483934   18263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:06:57.611828   18263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:06:57.769307   18263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:06:57.769402   18263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:06:57.775451   18263 start.go:562] Will wait 60s for crictl version
	I0319 19:06:57.775509   18263 ssh_runner.go:195] Run: which crictl
	I0319 19:06:57.779656   18263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:06:57.818881   18263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:06:57.818998   18263 ssh_runner.go:195] Run: crio --version
	I0319 19:06:57.852440   18263 ssh_runner.go:195] Run: crio --version
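
The two 60-second waits above poll until the CRI socket exists and crictl answers after the crio restart. A minimal, hypothetical Go sketch of that kind of bounded polling; the helper name waitFor is made up for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor retries check every interval until it succeeds or the deadline passes.
func waitFor(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Wait for the socket file, then for a working crictl.
	_ = waitFor(60*time.Second, time.Second, func() error {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err
	})
	_ = waitFor(60*time.Second, time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	})
}
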
	I0319 19:06:57.884625   18263 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:06:57.886150   18263 main.go:141] libmachine: (addons-630101) Calling .GetIP
	I0319 19:06:57.888838   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:57.889165   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:06:57.889193   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:06:57.889421   18263 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:06:57.893967   18263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:06:57.907354   18263 kubeadm.go:877] updating cluster {Name:addons-630101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-630101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:06:57.907463   18263 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:06:57.907517   18263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:06:57.947269   18263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 19:06:57.947339   18263 ssh_runner.go:195] Run: which lz4
	I0319 19:06:57.951791   18263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 19:06:57.956352   18263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 19:06:57.956378   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 19:06:59.551406   18263 crio.go:462] duration metric: took 1.599638031s to copy over tarball
	I0319 19:06:59.551470   18263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 19:07:02.000611   18263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.449108178s)
	I0319 19:07:02.000644   18263 crio.go:469] duration metric: took 2.449215404s to extract the tarball
	I0319 19:07:02.000652   18263 ssh_runner.go:146] rm: /preloaded.tar.lz4
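
The preload path stats /preloaded.tar.lz4 on the guest, copies the cached tarball over SSH when it is missing, extracts it into /var with lz4 while preserving xattrs, and deletes it. A hypothetical Go sketch of the check-and-extract step only:

package main

import (
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// Missing on the guest: in the real flow the cached tarball is
		// copied over SSH first (the scp step in the log above).
		panic("preload tarball not present: " + err.Error())
	}
	// Extract into /var, preserving capability xattrs, as in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	// Free the space once the images are unpacked.
	_ = os.Remove(tarball)
}
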
	I0319 19:07:02.038884   18263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:07:02.079484   18263 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:07:02.079507   18263 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:07:02.079515   18263 kubeadm.go:928] updating node { 192.168.39.203 8443 v1.29.3 crio true true} ...
	I0319 19:07:02.079602   18263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-630101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-630101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:07:02.079663   18263 ssh_runner.go:195] Run: crio config
	I0319 19:07:02.123743   18263 cni.go:84] Creating CNI manager for ""
	I0319 19:07:02.123766   18263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:07:02.123778   18263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:07:02.123807   18263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-630101 NodeName:addons-630101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:07:02.123956   18263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-630101"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 19:07:02.124013   18263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:07:02.134525   18263 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:07:02.134594   18263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 19:07:02.144371   18263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0319 19:07:02.162014   18263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:07:02.179403   18263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0319 19:07:02.196829   18263 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0319 19:07:02.201020   18263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:07:02.214561   18263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:07:02.338056   18263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:07:02.354908   18263 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101 for IP: 192.168.39.203
	I0319 19:07:02.354932   18263 certs.go:194] generating shared ca certs ...
	I0319 19:07:02.354953   18263 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.355102   18263 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:07:02.437740   18263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt ...
	I0319 19:07:02.437766   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt: {Name:mk343485837b5fe90b3145b48fd17b7bf2ab009e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.437925   18263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key ...
	I0319 19:07:02.437936   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key: {Name:mk79191175e2284c4e1876728953a699c8f6653a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.438002   18263 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:07:02.622261   18263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt ...
	I0319 19:07:02.622285   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt: {Name:mk1bbb150c0284dfa10d88c2c5b16e2fb5cb3d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.622431   18263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key ...
	I0319 19:07:02.622441   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key: {Name:mkfa654346e1ea02f23adc1e332d087647a840fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.622506   18263 certs.go:256] generating profile certs ...
	I0319 19:07:02.622558   18263 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.key
	I0319 19:07:02.622572   18263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt with IP's: []
	I0319 19:07:02.787765   18263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt ...
	I0319 19:07:02.787792   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: {Name:mke335cc062eee910d3392f793db5510f77e9aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.787954   18263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.key ...
	I0319 19:07:02.787965   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.key: {Name:mk2757b0a96c795638b089d5e88ab9d84c83986b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.788032   18263 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.key.a75db73a
	I0319 19:07:02.788048   18263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.crt.a75db73a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203]
	I0319 19:07:02.870015   18263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.crt.a75db73a ...
	I0319 19:07:02.870042   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.crt.a75db73a: {Name:mk5ab673dcfcf5715abc0edf837c1fceb87cda8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.870189   18263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.key.a75db73a ...
	I0319 19:07:02.870201   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.key.a75db73a: {Name:mk53c2840380a05205bf75effcd8b6597e4a4495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:02.870262   18263 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.crt.a75db73a -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.crt
	I0319 19:07:02.870329   18263 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.key.a75db73a -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.key
	I0319 19:07:02.870372   18263 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.key
	I0319 19:07:02.870387   18263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.crt with IP's: []
	I0319 19:07:03.097120   18263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.crt ...
	I0319 19:07:03.097147   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.crt: {Name:mk7a41f11b8fbd8b816b53c6b9340b1f1ac26e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:03.097306   18263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.key ...
	I0319 19:07:03.097317   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.key: {Name:mk0314dca88c0aa3e5e6179622c26868fb9d03b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:03.097485   18263 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:07:03.097519   18263 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:07:03.097539   18263 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:07:03.097560   18263 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:07:03.098122   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:07:03.136433   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:07:03.166287   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:07:03.202230   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:07:03.229073   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0319 19:07:03.255961   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:07:03.282832   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:07:03.309015   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 19:07:03.335144   18263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:07:03.361963   18263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:07:03.380233   18263 ssh_runner.go:195] Run: openssl version
	I0319 19:07:03.386646   18263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:07:03.398532   18263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:07:03.403689   18263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:07:03.403744   18263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:07:03.410055   18263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
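
Trusting the CA system-wide works by asking openssl for the certificate's subject hash and symlinking the PEM as <hash>.0 under /etc/ssl/certs, which is what the two commands above do. A hypothetical Go sketch of the same steps:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	// Ask openssl for the subject hash used to name links in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link so OpenSSL-based clients pick up the CA.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("trusted CA linked as", link)
}
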
	I0319 19:07:03.422205   18263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:07:03.427012   18263 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:07:03.427053   18263 kubeadm.go:391] StartCluster: {Name:addons-630101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-630101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:07:03.427111   18263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:07:03.427146   18263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:07:03.468793   18263 cri.go:89] found id: ""
	I0319 19:07:03.468854   18263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 19:07:03.479840   18263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 19:07:03.490233   18263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 19:07:03.500234   18263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 19:07:03.500255   18263 kubeadm.go:156] found existing configuration files:
	
	I0319 19:07:03.500308   18263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 19:07:03.509734   18263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 19:07:03.509772   18263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 19:07:03.519844   18263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 19:07:03.530236   18263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 19:07:03.530277   18263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 19:07:03.541145   18263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 19:07:03.551713   18263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 19:07:03.551779   18263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 19:07:03.562124   18263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 19:07:03.572678   18263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 19:07:03.572718   18263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 19:07:03.583572   18263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 19:07:03.774649   18263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 19:07:14.075952   18263 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 19:07:14.076024   18263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 19:07:14.076113   18263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 19:07:14.076304   18263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 19:07:14.076424   18263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 19:07:14.076512   18263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 19:07:14.078698   18263 out.go:204]   - Generating certificates and keys ...
	I0319 19:07:14.078786   18263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 19:07:14.078882   18263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 19:07:14.078988   18263 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 19:07:14.079072   18263 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 19:07:14.079155   18263 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 19:07:14.079225   18263 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 19:07:14.079296   18263 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 19:07:14.079472   18263 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-630101 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I0319 19:07:14.079566   18263 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 19:07:14.079705   18263 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-630101 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I0319 19:07:14.079797   18263 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 19:07:14.079888   18263 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 19:07:14.079950   18263 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 19:07:14.080032   18263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 19:07:14.080105   18263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 19:07:14.080187   18263 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 19:07:14.080251   18263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 19:07:14.080352   18263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 19:07:14.080427   18263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 19:07:14.080528   18263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 19:07:14.080607   18263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 19:07:14.082660   18263 out.go:204]   - Booting up control plane ...
	I0319 19:07:14.082784   18263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 19:07:14.082881   18263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 19:07:14.082980   18263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 19:07:14.083143   18263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 19:07:14.083257   18263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 19:07:14.083313   18263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 19:07:14.083490   18263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 19:07:14.083597   18263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003191 seconds
	I0319 19:07:14.083738   18263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 19:07:14.083914   18263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 19:07:14.083994   18263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 19:07:14.084250   18263 kubeadm.go:309] [mark-control-plane] Marking the node addons-630101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 19:07:14.084351   18263 kubeadm.go:309] [bootstrap-token] Using token: jxx3yk.rxs58csbbf15mjb4
	I0319 19:07:14.086500   18263 out.go:204]   - Configuring RBAC rules ...
	I0319 19:07:14.086634   18263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 19:07:14.086713   18263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 19:07:14.086858   18263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 19:07:14.086978   18263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 19:07:14.087114   18263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 19:07:14.087231   18263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 19:07:14.087376   18263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 19:07:14.087437   18263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 19:07:14.087508   18263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 19:07:14.087523   18263 kubeadm.go:309] 
	I0319 19:07:14.087604   18263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 19:07:14.087613   18263 kubeadm.go:309] 
	I0319 19:07:14.087720   18263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 19:07:14.087730   18263 kubeadm.go:309] 
	I0319 19:07:14.087763   18263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 19:07:14.087847   18263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 19:07:14.087922   18263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 19:07:14.087932   18263 kubeadm.go:309] 
	I0319 19:07:14.088019   18263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 19:07:14.088032   18263 kubeadm.go:309] 
	I0319 19:07:14.088067   18263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 19:07:14.088073   18263 kubeadm.go:309] 
	I0319 19:07:14.088142   18263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 19:07:14.088244   18263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 19:07:14.088385   18263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 19:07:14.088398   18263 kubeadm.go:309] 
	I0319 19:07:14.088520   18263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 19:07:14.088616   18263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 19:07:14.088631   18263 kubeadm.go:309] 
	I0319 19:07:14.088743   18263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jxx3yk.rxs58csbbf15mjb4 \
	I0319 19:07:14.088841   18263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 19:07:14.088874   18263 kubeadm.go:309] 	--control-plane 
	I0319 19:07:14.088884   18263 kubeadm.go:309] 
	I0319 19:07:14.088979   18263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 19:07:14.088986   18263 kubeadm.go:309] 
	I0319 19:07:14.089047   18263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jxx3yk.rxs58csbbf15mjb4 \
	I0319 19:07:14.089147   18263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 19:07:14.089167   18263 cni.go:84] Creating CNI manager for ""
	I0319 19:07:14.089178   18263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:07:14.090861   18263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 19:07:14.092208   18263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 19:07:14.107842   18263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 19:07:14.159266   18263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 19:07:14.159330   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:14.159358   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-630101 minikube.k8s.io/updated_at=2024_03_19T19_07_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=addons-630101 minikube.k8s.io/primary=true
	I0319 19:07:14.409285   18263 ops.go:34] apiserver oom_adj: -16
	I0319 19:07:14.419862   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:14.919949   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:15.420349   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:15.920060   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:16.419916   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:16.920650   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:17.420638   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:17.920369   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:18.419874   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:18.920749   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:19.420740   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:19.920177   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:20.420055   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:20.920368   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:21.419990   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:21.920870   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:22.420617   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:22.919983   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:23.420057   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:23.920004   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:24.420441   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:24.920784   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:25.420359   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:25.920095   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:26.420762   18263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:07:26.512821   18263 kubeadm.go:1107] duration metric: took 12.35355704s to wait for elevateKubeSystemPrivileges
	W0319 19:07:26.512871   18263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 19:07:26.512880   18263 kubeadm.go:393] duration metric: took 23.085828962s to StartCluster
	I0319 19:07:26.512895   18263 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:26.513026   18263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:07:26.513436   18263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:07:26.513697   18263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:07:26.513724   18263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 19:07:26.515875   18263 out.go:177] * Verifying Kubernetes components...
	I0319 19:07:26.513767   18263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0319 19:07:26.514411   18263 config.go:182] Loaded profile config "addons-630101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:07:26.517441   18263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:07:26.517480   18263 addons.go:69] Setting cloud-spanner=true in profile "addons-630101"
	I0319 19:07:26.517503   18263 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-630101"
	I0319 19:07:26.517511   18263 addons.go:69] Setting ingress-dns=true in profile "addons-630101"
	I0319 19:07:26.517511   18263 addons.go:69] Setting yakd=true in profile "addons-630101"
	I0319 19:07:26.517530   18263 addons.go:234] Setting addon cloud-spanner=true in "addons-630101"
	I0319 19:07:26.517528   18263 addons.go:69] Setting helm-tiller=true in profile "addons-630101"
	I0319 19:07:26.517549   18263 addons.go:234] Setting addon ingress-dns=true in "addons-630101"
	I0319 19:07:26.517553   18263 addons.go:234] Setting addon yakd=true in "addons-630101"
	I0319 19:07:26.517559   18263 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-630101"
	I0319 19:07:26.517559   18263 addons.go:234] Setting addon helm-tiller=true in "addons-630101"
	I0319 19:07:26.517569   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.517571   18263 addons.go:69] Setting gcp-auth=true in profile "addons-630101"
	I0319 19:07:26.517585   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.517587   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.517590   18263 addons.go:69] Setting ingress=true in profile "addons-630101"
	I0319 19:07:26.517599   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.517603   18263 mustload.go:65] Loading cluster: addons-630101
	I0319 19:07:26.517588   18263 addons.go:69] Setting registry=true in profile "addons-630101"
	I0319 19:07:26.517619   18263 addons.go:234] Setting addon ingress=true in "addons-630101"
	I0319 19:07:26.517644   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.517647   18263 addons.go:234] Setting addon registry=true in "addons-630101"
	I0319 19:07:26.517717   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.517866   18263 config.go:182] Loaded profile config "addons-630101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:07:26.518051   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518051   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518068   18263 addons.go:69] Setting storage-provisioner=true in profile "addons-630101"
	I0319 19:07:26.517586   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.518091   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518091   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518094   18263 addons.go:234] Setting addon storage-provisioner=true in "addons-630101"
	I0319 19:07:26.518101   18263 addons.go:69] Setting metrics-server=true in profile "addons-630101"
	I0319 19:07:26.518108   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.517496   18263 addons.go:69] Setting default-storageclass=true in profile "addons-630101"
	I0319 19:07:26.518122   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.518127   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518129   18263 addons.go:234] Setting addon metrics-server=true in "addons-630101"
	I0319 19:07:26.518132   18263 addons.go:69] Setting volumesnapshots=true in profile "addons-630101"
	I0319 19:07:26.518137   18263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-630101"
	I0319 19:07:26.518152   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.518165   18263 addons.go:234] Setting addon volumesnapshots=true in "addons-630101"
	I0319 19:07:26.518191   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518210   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518056   18263 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-630101"
	I0319 19:07:26.518278   18263 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-630101"
	I0319 19:07:26.518327   18263 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-630101"
	I0319 19:07:26.518359   18263 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-630101"
	I0319 19:07:26.518097   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518110   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518423   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518094   18263 addons.go:69] Setting inspektor-gadget=true in profile "addons-630101"
	I0319 19:07:26.518453   18263 addons.go:234] Setting addon inspektor-gadget=true in "addons-630101"
	I0319 19:07:26.518123   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518608   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518624   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518674   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518685   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518697   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518714   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518756   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518766   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.518783   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518790   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.518794   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518797   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518809   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.518862   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.518865   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.519120   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.519157   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.539126   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45545
	I0319 19:07:26.539371   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0319 19:07:26.539518   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0319 19:07:26.539842   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.539977   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.540182   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.540541   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0319 19:07:26.540617   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.540630   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.540638   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.540661   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.541427   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.541502   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.541736   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.542090   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.542128   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.542196   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.542232   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.542770   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.542770   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.543406   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.543457   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.543492   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I0319 19:07:26.543480   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.543598   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.543830   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.544345   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.544518   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.544747   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.544754   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.544785   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.544993   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.545095   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.545656   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.545699   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.555962   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.555991   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.556111   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0319 19:07:26.556302   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.556592   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.556683   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.557284   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.557307   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.557351   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.557404   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.557774   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.558355   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.558391   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.580724   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37065
	I0319 19:07:26.581355   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.581481   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0319 19:07:26.582080   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.582097   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.582495   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.583105   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.583128   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.583320   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0319 19:07:26.583837   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.584427   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.584446   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.585086   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.585651   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I0319 19:07:26.585843   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.585858   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.585952   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0319 19:07:26.586055   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.586427   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.586515   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.586541   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.586902   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.587008   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.587021   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.587022   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.587058   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0319 19:07:26.587100   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0319 19:07:26.587395   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.587553   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.588076   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.588169   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.588327   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.588386   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.588445   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0319 19:07:26.589269   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.589307   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.589507   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.589734   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.591905   18263 out.go:177]   - Using image docker.io/registry:2.8.3
	I0319 19:07:26.590516   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0319 19:07:26.591904   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.590644   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.591046   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.591096   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0319 19:07:26.591271   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.592521   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.593110   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34387
	I0319 19:07:26.594828   18263 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0319 19:07:26.593510   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.593575   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.593752   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.593811   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.594001   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.594225   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.595741   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I0319 19:07:26.596034   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.596075   18263 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0319 19:07:26.597074   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0319 19:07:26.597097   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.597103   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.596313   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.596343   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.596377   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.597185   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.597503   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0319 19:07:26.597528   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.597535   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.597857   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.597986   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.598387   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.598919   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.599808   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.600106   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0319 19:07:26.600244   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.599866   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.599857   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0319 19:07:26.600692   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.600728   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.601114   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.601924   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.601977   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.601983   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.602026   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.601890   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0319 19:07:26.603520   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0319 19:07:26.602407   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.602705   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.603170   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.603342   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.603348   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.603794   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.606367   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0319 19:07:26.605285   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.605400   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.605444   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.605451   18263 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-630101"
	I0319 19:07:26.605830   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.605883   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.605937   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.607667   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.608308   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.608836   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.608844   18263 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0319 19:07:26.608861   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.608871   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.610156   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0319 19:07:26.610293   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.610470   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.611396   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.611762   18263 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0319 19:07:26.612451   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.614957   18263 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0319 19:07:26.613214   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0319 19:07:26.613524   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.617307   18263 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0319 19:07:26.618370   18263 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0319 19:07:26.619757   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0319 19:07:26.618526   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0319 19:07:26.619939   18263 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0319 19:07:26.620866   18263 addons.go:234] Setting addon default-storageclass=true in "addons-630101"
	I0319 19:07:26.621130   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:26.621507   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.621543   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.621770   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.621863   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0319 19:07:26.621872   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0319 19:07:26.621886   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.621919   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0319 19:07:26.621931   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.623473   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0319 19:07:26.623597   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0319 19:07:26.623995   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.624519   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.624544   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.624624   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.624940   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.625174   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.626037   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.626059   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.626387   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.626565   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.627397   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0319 19:07:26.627730   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.630161   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.630170   18263 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0319 19:07:26.630187   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.631885   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.631901   18263 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0319 19:07:26.631915   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0319 19:07:26.629501   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0319 19:07:26.631933   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.631915   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.628211   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.629588   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0319 19:07:26.632001   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.630815   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.630835   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.632027   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.630991   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.632289   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.632338   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.632559   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.632585   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.632617   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.632630   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.632748   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.632749   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.632768   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.632764   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.633374   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.633455   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.633659   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.634092   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.634132   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.634492   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.634510   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.634655   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.634668   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.634668   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.634855   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.636549   18263 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0319 19:07:26.635183   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.635391   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.635857   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.635870   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.636338   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.636950   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0319 19:07:26.637980   18263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 19:07:26.637992   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 19:07:26.638010   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.638274   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.638295   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.638361   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.638464   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.638563   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.638676   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.638957   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.639180   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0319 19:07:26.639486   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.639635   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.640018   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.640032   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.640279   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.640496   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.640510   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.640622   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.640913   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.641424   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.641491   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.641672   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.642472   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.644607   18263 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0319 19:07:26.644623   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.646129   18263 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0319 19:07:26.646141   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0319 19:07:26.646157   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.643613   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.643844   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.644220   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0319 19:07:26.644568   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.642957   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.646342   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.646364   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.646702   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.646730   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0319 19:07:26.648144   18263 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0319 19:07:26.646854   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.647170   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.647288   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.649041   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.649365   18263 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0319 19:07:26.649373   18263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 19:07:26.649527   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.650507   18263 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0319 19:07:26.650581   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.651745   18263 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0319 19:07:26.651754   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0319 19:07:26.650660   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0319 19:07:26.651772   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.651755   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.651774   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.649550   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.653197   18263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:07:26.653213   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 19:07:26.653229   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.651182   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.653262   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.651114   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.653279   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.651976   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.653684   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.653876   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.654303   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.654333   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.654666   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.655520   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.655872   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.656135   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.656153   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.656330   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.656509   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.656630   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.656666   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.656708   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.656748   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.657065   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.657100   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.657247   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.657305   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.657450   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.657595   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.657785   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.658029   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.658045   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.658071   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.658207   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.658338   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.658499   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.658601   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.660403   18263 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0319 19:07:26.662220   18263 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0319 19:07:26.662232   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0319 19:07:26.662243   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.660640   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0319 19:07:26.662750   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.662815   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0319 19:07:26.663123   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.663262   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.663277   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.663609   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.663633   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.663979   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.664151   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.664235   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.664782   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:26.664820   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:26.665853   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.667616   18263 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0319 19:07:26.666202   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.666589   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.668925   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.668955   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.668991   18263 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0319 19:07:26.669015   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0319 19:07:26.669037   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.669090   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.669616   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.669769   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.673029   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.673472   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.673494   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.673629   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.673764   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.673903   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.674011   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.680939   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0319 19:07:26.681329   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.681761   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.681778   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.682105   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.682296   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.682471   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0319 19:07:26.682784   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:26.683344   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:26.683366   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:26.683682   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:26.683933   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:26.684078   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.686184   18263 out.go:177]   - Using image docker.io/busybox:stable
	I0319 19:07:26.685284   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:26.687836   18263 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0319 19:07:26.689311   18263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0319 19:07:26.689334   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0319 19:07:26.689351   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.688131   18263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 19:07:26.689418   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 19:07:26.689430   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:26.692585   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.692913   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.692945   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.693082   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.693128   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.693226   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.693369   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.693506   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:26.693631   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:26.693660   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:26.693785   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:26.693930   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:26.694090   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:26.694214   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:27.019389   18263 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0319 19:07:27.019420   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0319 19:07:27.058112   18263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:07:27.059798   18263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
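(Editorial note.) The pipeline above rewrites the CoreDNS ConfigMap in place: the first sed expression inserts a hosts block immediately above the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the host-side gateway 192.168.39.1, and the second inserts "log" above "errors". Reconstructed from those sed expressions (a sketch, assuming the stock kubeadm Corefile; not copied from the cluster), the injected fragment looks like:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

The "fallthrough" keeps all other names flowing to the existing forward plugin, so only the single extra host record changes resolution behaviour.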
	I0319 19:07:27.081599   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0319 19:07:27.090452   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0319 19:07:27.138450   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0319 19:07:27.147944   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 19:07:27.149922   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0319 19:07:27.180293   18263 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0319 19:07:27.180315   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0319 19:07:27.183602   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0319 19:07:27.199883   18263 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0319 19:07:27.199908   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0319 19:07:27.226767   18263 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0319 19:07:27.226789   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0319 19:07:27.247770   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0319 19:07:27.247797   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0319 19:07:27.311314   18263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 19:07:27.311333   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0319 19:07:27.320166   18263 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0319 19:07:27.320196   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0319 19:07:27.384870   18263 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0319 19:07:27.384902   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0319 19:07:27.403911   18263 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0319 19:07:27.403935   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0319 19:07:27.405015   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0319 19:07:27.431326   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:07:27.444932   18263 node_ready.go:35] waiting up to 6m0s for node "addons-630101" to be "Ready" ...
	I0319 19:07:27.448769   18263 node_ready.go:49] node "addons-630101" has status "Ready":"True"
	I0319 19:07:27.448790   18263 node_ready.go:38] duration metric: took 3.833826ms for node "addons-630101" to be "Ready" ...
	I0319 19:07:27.448798   18263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:07:27.455987   18263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ftlmb" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:27.486027   18263 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0319 19:07:27.486052   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0319 19:07:27.653778   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0319 19:07:27.653800   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0319 19:07:27.790921   18263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 19:07:27.790949   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 19:07:27.803728   18263 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0319 19:07:27.803747   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0319 19:07:27.812674   18263 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0319 19:07:27.812694   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0319 19:07:27.837306   18263 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0319 19:07:27.837329   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0319 19:07:27.886636   18263 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0319 19:07:27.886656   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0319 19:07:28.000427   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0319 19:07:28.000448   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0319 19:07:28.056624   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0319 19:07:28.071559   18263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 19:07:28.071581   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 19:07:28.093901   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0319 19:07:28.093921   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0319 19:07:28.097810   18263 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0319 19:07:28.097830   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0319 19:07:28.105021   18263 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0319 19:07:28.105038   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0319 19:07:28.325012   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0319 19:07:28.325033   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0319 19:07:28.466503   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 19:07:28.619273   18263 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0319 19:07:28.619293   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0319 19:07:28.682181   18263 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0319 19:07:28.682205   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0319 19:07:28.718340   18263 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0319 19:07:28.718368   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0319 19:07:28.739301   18263 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0319 19:07:28.739322   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0319 19:07:28.842622   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0319 19:07:28.987046   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0319 19:07:29.066805   18263 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0319 19:07:29.066827   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0319 19:07:29.106941   18263 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0319 19:07:29.106963   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0319 19:07:29.366016   18263 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0319 19:07:29.366036   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0319 19:07:29.466651   18263 pod_ready.go:102] pod "coredns-76f75df574-ftlmb" in "kube-system" namespace has status "Ready":"False"
	I0319 19:07:29.519460   18263 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0319 19:07:29.519478   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0319 19:07:29.815746   18263 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0319 19:07:29.815772   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0319 19:07:29.950329   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0319 19:07:30.008230   18263 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.948393067s)
	I0319 19:07:30.008284   18263 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0319 19:07:30.139542   18263 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0319 19:07:30.139565   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0319 19:07:30.519125   18263 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-630101" context rescaled to 1 replicas
	I0319 19:07:30.522371   18263 pod_ready.go:92] pod "coredns-76f75df574-ftlmb" in "kube-system" namespace has status "Ready":"True"
	I0319 19:07:30.522396   18263 pod_ready.go:81] duration metric: took 3.066384658s for pod "coredns-76f75df574-ftlmb" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:30.522409   18263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tjmcc" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:30.635894   18263 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0319 19:07:30.635915   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0319 19:07:31.088885   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0319 19:07:32.576361   18263 pod_ready.go:102] pod "coredns-76f75df574-tjmcc" in "kube-system" namespace has status "Ready":"False"
	I0319 19:07:33.437776   18263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0319 19:07:33.437815   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:33.441347   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:33.441816   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:33.441858   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:33.442108   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:33.442334   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:33.442519   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:33.442661   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:33.948798   18263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0319 19:07:34.027721   18263 addons.go:234] Setting addon gcp-auth=true in "addons-630101"
	I0319 19:07:34.027767   18263 host.go:66] Checking if "addons-630101" exists ...
	I0319 19:07:34.028068   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:34.028098   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:34.043473   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0319 19:07:34.043898   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:34.044422   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:34.044446   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:34.044738   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:34.045182   18263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:07:34.045207   18263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:07:34.060491   18263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39847
	I0319 19:07:34.060892   18263 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:07:34.061397   18263 main.go:141] libmachine: Using API Version  1
	I0319 19:07:34.061421   18263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:07:34.061747   18263 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:07:34.061955   18263 main.go:141] libmachine: (addons-630101) Calling .GetState
	I0319 19:07:34.063636   18263 main.go:141] libmachine: (addons-630101) Calling .DriverName
	I0319 19:07:34.063914   18263 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0319 19:07:34.063934   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHHostname
	I0319 19:07:34.066630   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:34.067004   18263 main.go:141] libmachine: (addons-630101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:1a:da", ip: ""} in network mk-addons-630101: {Iface:virbr1 ExpiryTime:2024-03-19 20:06:48 +0000 UTC Type:0 Mac:52:54:00:8b:1a:da Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-630101 Clientid:01:52:54:00:8b:1a:da}
	I0319 19:07:34.067032   18263 main.go:141] libmachine: (addons-630101) DBG | domain addons-630101 has defined IP address 192.168.39.203 and MAC address 52:54:00:8b:1a:da in network mk-addons-630101
	I0319 19:07:34.067184   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHPort
	I0319 19:07:34.067367   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHKeyPath
	I0319 19:07:34.067533   18263 main.go:141] libmachine: (addons-630101) Calling .GetSSHUsername
	I0319 19:07:34.067711   18263 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/addons-630101/id_rsa Username:docker}
	I0319 19:07:35.129768   18263 pod_ready.go:102] pod "coredns-76f75df574-tjmcc" in "kube-system" namespace has status "Ready":"False"
	I0319 19:07:36.573905   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.492268334s)
	I0319 19:07:36.573957   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.573970   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.573988   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.435514395s)
	I0319 19:07:36.573956   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.483472519s)
	I0319 19:07:36.574044   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.426077083s)
	I0319 19:07:36.574049   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574059   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574074   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574084   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574024   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574119   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574137   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.424187876s)
	I0319 19:07:36.574156   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.390526938s)
	I0319 19:07:36.574168   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574172   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574178   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574180   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574224   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.169177403s)
	I0319 19:07:36.574248   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574264   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574262   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.142910961s)
	I0319 19:07:36.574306   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574314   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574315   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.517662158s)
	I0319 19:07:36.574364   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574373   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574421   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.107884289s)
	I0319 19:07:36.574438   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574447   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574573   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.574589   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.574608   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.574633   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.574636   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.624273947s)
	I0319 19:07:36.574608   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.574652   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.574657   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574661   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574667   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574670   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574673   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.574670   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.731825908s)
	I0319 19:07:36.574682   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.574690   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574698   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574700   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574640   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.574714   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574717   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574724   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574741   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.574759   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.574766   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.574768   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.574774   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574781   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574793   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.574801   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.574809   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.574816   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.574578   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.587502793s)
	W0319 19:07:36.575026   18263 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0319 19:07:36.575079   18263 retry.go:31] will retry after 347.042416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
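(Editorial note.) The two identical dumps above are the same failure logged twice, once by the addon applier (addons.go:452) and once by the retry loop (retry.go:31). The cause is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the REST mapping for the new kind is not yet registered, hence "ensure CRDs are installed first". minikube copes by retrying (the attempt at 19:07:36.923 below re-runs the apply with --force). A sketch of an ordering that avoids the race, assuming kubectl on the node and the same manifest paths shown in the log:

	# apply the snapshot CRDs first, wait until the API server reports them
	# Established, then apply the objects that reference the new kinds
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	              crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	              crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	              crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml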
	I0319 19:07:36.575113   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.575139   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.575153   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.575167   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.575181   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.575193   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.575220   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.575245   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.575263   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.575555   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.575572   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.575588   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.575639   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.575681   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.575697   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.576216   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.576243   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576249   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.575277   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.575292   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576320   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.576331   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.576338   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.576406   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576413   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.576422   18263 addons.go:470] Verifying addon ingress=true in "addons-630101"
	I0319 19:07:36.578024   18263 out.go:177] * Verifying ingress addon...
	I0319 19:07:36.576646   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576758   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.576780   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576802   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.576818   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576831   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.576847   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.576882   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.576901   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.577908   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.577941   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.579011   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.579568   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.579724   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.579741   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.579749   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.579758   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.579768   18263 addons.go:470] Verifying addon metrics-server=true in "addons-630101"
	I0319 19:07:36.579748   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.579826   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.579835   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.579843   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.580170   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.579035   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.580251   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.580284   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.580306   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.581996   18263 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-630101 service yakd-dashboard -n yakd-dashboard
	
	I0319 19:07:36.580207   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.580490   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.580628   18263 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0319 19:07:36.580681   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.581141   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.581162   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.581721   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.581742   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.583617   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.583628   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.583631   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.583687   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.583742   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.583759   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.583768   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.583944   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.583995   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.584085   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.584088   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.584096   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.584104   18263 addons.go:470] Verifying addon registry=true in "addons-630101"
	I0319 19:07:36.585457   18263 out.go:177] * Verifying registry addon...
	I0319 19:07:36.586986   18263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0319 19:07:36.610257   18263 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0319 19:07:36.610279   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:36.620732   18263 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0319 19:07:36.620749   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:36.634554   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.634580   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.634916   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.634935   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	W0319 19:07:36.635018   18263 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
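(Editorial note.) The warning above is an optimistic-concurrency failure: the addon callback read the local-path StorageClass, another writer updated it in the meantime, and the subsequent update was rejected because the cached resourceVersion had gone stale. Done by hand the same step is a single annotation patch; a merge patch carries no resourceVersion, so it does not trip over the conflict seen here (a sketch, assuming the local-path class already exists in the cluster):

	# mark rancher local-path as the default StorageClass
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'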
	I0319 19:07:36.638133   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:36.638154   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:36.638492   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:36.638518   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:36.638531   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:36.923023   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0319 19:07:37.090282   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:37.094787   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:37.551336   18263 pod_ready.go:102] pod "coredns-76f75df574-tjmcc" in "kube-system" namespace has status "Ready":"False"
	I0319 19:07:37.560288   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.471349731s)
	I0319 19:07:37.560342   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:37.560358   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:37.560371   18263 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.4964351s)
	I0319 19:07:37.562172   18263 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0319 19:07:37.560628   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:37.560651   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:37.562206   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:37.563502   18263 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0319 19:07:37.565037   18263 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0319 19:07:37.565051   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0319 19:07:37.563518   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:37.565127   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:37.565404   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:37.565413   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:37.565424   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:37.565443   18263 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-630101"
	I0319 19:07:37.566819   18263 out.go:177] * Verifying csi-hostpath-driver addon...
	I0319 19:07:37.568615   18263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0319 19:07:37.579137   18263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0319 19:07:37.579152   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:37.616436   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:37.621670   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:37.694138   18263 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0319 19:07:37.694158   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0319 19:07:37.829760   18263 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0319 19:07:37.829781   18263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0319 19:07:37.907194   18263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0319 19:07:38.074771   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:38.088008   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:38.092467   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:38.587363   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:38.593752   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:38.595863   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:39.092492   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:39.102505   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:39.102767   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:39.592850   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:39.602414   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:39.602454   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:39.602976   18263 pod_ready.go:102] pod "coredns-76f75df574-tjmcc" in "kube-system" namespace has status "Ready":"False"
	I0319 19:07:39.753677   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.830607696s)
	I0319 19:07:39.753720   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:39.753734   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:39.753784   18263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.846559166s)
	I0319 19:07:39.753820   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:39.753828   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:39.753991   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:39.754004   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:39.754013   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:39.754019   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:39.754303   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:39.754304   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:39.754324   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:39.754332   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:39.754338   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:39.754341   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:39.754348   18263 main.go:141] libmachine: Making call to close driver server
	I0319 19:07:39.754356   18263 main.go:141] libmachine: (addons-630101) Calling .Close
	I0319 19:07:39.754607   18263 main.go:141] libmachine: (addons-630101) DBG | Closing plugin on server side
	I0319 19:07:39.754625   18263 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:07:39.754637   18263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:07:39.756650   18263 addons.go:470] Verifying addon gcp-auth=true in "addons-630101"
	I0319 19:07:39.758174   18263 out.go:177] * Verifying gcp-auth addon...
	I0319 19:07:39.760311   18263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0319 19:07:39.764534   18263 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0319 19:07:39.764551   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:40.075648   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:40.088730   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:40.091638   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:40.273038   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:40.526090   18263 pod_ready.go:97] error getting pod "coredns-76f75df574-tjmcc" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-tjmcc" not found
	I0319 19:07:40.526113   18263 pod_ready.go:81] duration metric: took 10.003695141s for pod "coredns-76f75df574-tjmcc" in "kube-system" namespace to be "Ready" ...
	E0319 19:07:40.526122   18263 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-tjmcc" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-tjmcc" not found
	I0319 19:07:40.526128   18263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.537574   18263 pod_ready.go:92] pod "etcd-addons-630101" in "kube-system" namespace has status "Ready":"True"
	I0319 19:07:40.537594   18263 pod_ready.go:81] duration metric: took 11.459985ms for pod "etcd-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.537602   18263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.551791   18263 pod_ready.go:92] pod "kube-apiserver-addons-630101" in "kube-system" namespace has status "Ready":"True"
	I0319 19:07:40.551809   18263 pod_ready.go:81] duration metric: took 14.200911ms for pod "kube-apiserver-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.551817   18263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.564195   18263 pod_ready.go:92] pod "kube-controller-manager-addons-630101" in "kube-system" namespace has status "Ready":"True"
	I0319 19:07:40.564215   18263 pod_ready.go:81] duration metric: took 12.392336ms for pod "kube-controller-manager-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.564224   18263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7zcm" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.576871   18263 pod_ready.go:92] pod "kube-proxy-v7zcm" in "kube-system" namespace has status "Ready":"True"
	I0319 19:07:40.576889   18263 pod_ready.go:81] duration metric: took 12.659332ms for pod "kube-proxy-v7zcm" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.576902   18263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.585718   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:40.589942   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:40.592085   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:40.727918   18263 pod_ready.go:92] pod "kube-scheduler-addons-630101" in "kube-system" namespace has status "Ready":"True"
	I0319 19:07:40.727940   18263 pod_ready.go:81] duration metric: took 151.031755ms for pod "kube-scheduler-addons-630101" in "kube-system" namespace to be "Ready" ...
	I0319 19:07:40.727948   18263 pod_ready.go:38] duration metric: took 13.279140878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:07:40.727963   18263 api_server.go:52] waiting for apiserver process to appear ...
	I0319 19:07:40.728006   18263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:07:40.764188   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:40.774495   18263 api_server.go:72] duration metric: took 14.26075911s to wait for apiserver process to appear ...
	I0319 19:07:40.774510   18263 api_server.go:88] waiting for apiserver healthz status ...
	I0319 19:07:40.774527   18263 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0319 19:07:40.780596   18263 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0319 19:07:40.781638   18263 api_server.go:141] control plane version: v1.29.3
	I0319 19:07:40.781658   18263 api_server.go:131] duration metric: took 7.142157ms to wait for apiserver health ...
	I0319 19:07:40.781665   18263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 19:07:40.934017   18263 system_pods.go:59] 18 kube-system pods found
	I0319 19:07:40.934045   18263 system_pods.go:61] "coredns-76f75df574-ftlmb" [962c919c-5144-4459-bac9-eb608143b937] Running
	I0319 19:07:40.934053   18263 system_pods.go:61] "csi-hostpath-attacher-0" [541fcb40-519e-4104-a00a-57eb81521b9e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0319 19:07:40.934059   18263 system_pods.go:61] "csi-hostpath-resizer-0" [35661e19-b0f1-49c6-93fc-7149b160f91e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0319 19:07:40.934067   18263 system_pods.go:61] "csi-hostpathplugin-pz87t" [a9073ade-2b9c-4207-8340-54a20004f6bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0319 19:07:40.934071   18263 system_pods.go:61] "etcd-addons-630101" [a690ea60-f254-42a7-af5b-072da6249d70] Running
	I0319 19:07:40.934075   18263 system_pods.go:61] "kube-apiserver-addons-630101" [96c20ae2-77dd-4b5c-bc39-ffd118f41801] Running
	I0319 19:07:40.934078   18263 system_pods.go:61] "kube-controller-manager-addons-630101" [4300aa35-c48b-489e-b28c-67fecc74dead] Running
	I0319 19:07:40.934083   18263 system_pods.go:61] "kube-ingress-dns-minikube" [da5b9f55-8583-40af-8c8c-cbf974077352] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0319 19:07:40.934086   18263 system_pods.go:61] "kube-proxy-v7zcm" [d2374b11-f4d7-4083-95bc-6ce5f9d0505b] Running
	I0319 19:07:40.934089   18263 system_pods.go:61] "kube-scheduler-addons-630101" [6e88da1e-7104-4fbf-b4d0-fbf6702ee528] Running
	I0319 19:07:40.934093   18263 system_pods.go:61] "metrics-server-69cf46c98-rxmfc" [ebb99aee-ec48-4d22-a827-17b63f98c4fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 19:07:40.934098   18263 system_pods.go:61] "nvidia-device-plugin-daemonset-ld4j7" [0bb3ac27-4dd0-4ffc-8d11-225f4858d40d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0319 19:07:40.934103   18263 system_pods.go:61] "registry-5c2dl" [33e86949-d2bb-4ead-9b37-bdeedecabf55] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0319 19:07:40.934108   18263 system_pods.go:61] "registry-proxy-9hbsf" [6c3ae126-cbbe-4d86-990a-82e1182780db] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0319 19:07:40.934114   18263 system_pods.go:61] "snapshot-controller-58dbcc7b99-9jplf" [8a0f5344-33cc-4a6d-8f9d-acf38bcd3f8c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0319 19:07:40.934122   18263 system_pods.go:61] "snapshot-controller-58dbcc7b99-g5bqt" [d7f59a23-aa68-4729-87b1-0c2c9c33a893] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0319 19:07:40.934126   18263 system_pods.go:61] "storage-provisioner" [4f195742-cbef-4d02-8551-5450578fe305] Running
	I0319 19:07:40.934131   18263 system_pods.go:61] "tiller-deploy-7b677967b9-pjgds" [b869828d-8013-4fb0-96fb-36e7be67a2d9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0319 19:07:40.934140   18263 system_pods.go:74] duration metric: took 152.470613ms to wait for pod list to return data ...
	I0319 19:07:40.934147   18263 default_sa.go:34] waiting for default service account to be created ...
	I0319 19:07:41.075235   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:41.088527   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:41.091648   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:41.126394   18263 default_sa.go:45] found service account: "default"
	I0319 19:07:41.126419   18263 default_sa.go:55] duration metric: took 192.266477ms for default service account to be created ...
	I0319 19:07:41.126427   18263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 19:07:41.264057   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:41.513376   18263 system_pods.go:86] 18 kube-system pods found
	I0319 19:07:41.513409   18263 system_pods.go:89] "coredns-76f75df574-ftlmb" [962c919c-5144-4459-bac9-eb608143b937] Running
	I0319 19:07:41.513417   18263 system_pods.go:89] "csi-hostpath-attacher-0" [541fcb40-519e-4104-a00a-57eb81521b9e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0319 19:07:41.513424   18263 system_pods.go:89] "csi-hostpath-resizer-0" [35661e19-b0f1-49c6-93fc-7149b160f91e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0319 19:07:41.513432   18263 system_pods.go:89] "csi-hostpathplugin-pz87t" [a9073ade-2b9c-4207-8340-54a20004f6bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0319 19:07:41.513437   18263 system_pods.go:89] "etcd-addons-630101" [a690ea60-f254-42a7-af5b-072da6249d70] Running
	I0319 19:07:41.513442   18263 system_pods.go:89] "kube-apiserver-addons-630101" [96c20ae2-77dd-4b5c-bc39-ffd118f41801] Running
	I0319 19:07:41.513446   18263 system_pods.go:89] "kube-controller-manager-addons-630101" [4300aa35-c48b-489e-b28c-67fecc74dead] Running
	I0319 19:07:41.513454   18263 system_pods.go:89] "kube-ingress-dns-minikube" [da5b9f55-8583-40af-8c8c-cbf974077352] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0319 19:07:41.513459   18263 system_pods.go:89] "kube-proxy-v7zcm" [d2374b11-f4d7-4083-95bc-6ce5f9d0505b] Running
	I0319 19:07:41.513464   18263 system_pods.go:89] "kube-scheduler-addons-630101" [6e88da1e-7104-4fbf-b4d0-fbf6702ee528] Running
	I0319 19:07:41.513469   18263 system_pods.go:89] "metrics-server-69cf46c98-rxmfc" [ebb99aee-ec48-4d22-a827-17b63f98c4fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 19:07:41.513477   18263 system_pods.go:89] "nvidia-device-plugin-daemonset-ld4j7" [0bb3ac27-4dd0-4ffc-8d11-225f4858d40d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0319 19:07:41.513490   18263 system_pods.go:89] "registry-5c2dl" [33e86949-d2bb-4ead-9b37-bdeedecabf55] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0319 19:07:41.513500   18263 system_pods.go:89] "registry-proxy-9hbsf" [6c3ae126-cbbe-4d86-990a-82e1182780db] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0319 19:07:41.513519   18263 system_pods.go:89] "snapshot-controller-58dbcc7b99-9jplf" [8a0f5344-33cc-4a6d-8f9d-acf38bcd3f8c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0319 19:07:41.513529   18263 system_pods.go:89] "snapshot-controller-58dbcc7b99-g5bqt" [d7f59a23-aa68-4729-87b1-0c2c9c33a893] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0319 19:07:41.513537   18263 system_pods.go:89] "storage-provisioner" [4f195742-cbef-4d02-8551-5450578fe305] Running
	I0319 19:07:41.513553   18263 system_pods.go:89] "tiller-deploy-7b677967b9-pjgds" [b869828d-8013-4fb0-96fb-36e7be67a2d9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0319 19:07:41.513568   18263 system_pods.go:126] duration metric: took 387.131439ms to wait for k8s-apps to be running ...
	I0319 19:07:41.513576   18263 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 19:07:41.513615   18263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:07:41.536326   18263 system_svc.go:56] duration metric: took 22.737576ms WaitForService to wait for kubelet
	I0319 19:07:41.536362   18263 kubeadm.go:576] duration metric: took 15.022626098s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:07:41.536395   18263 node_conditions.go:102] verifying NodePressure condition ...
	I0319 19:07:41.543862   18263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:07:41.543891   18263 node_conditions.go:123] node cpu capacity is 2
	I0319 19:07:41.543902   18263 node_conditions.go:105] duration metric: took 7.502087ms to run NodePressure ...
	I0319 19:07:41.543912   18263 start.go:240] waiting for startup goroutines ...
	I0319 19:07:41.574524   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:41.587463   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:41.591996   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:41.767379   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:42.075740   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:42.091603   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:42.098260   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:42.268092   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:42.574572   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:42.588939   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:42.593867   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:42.765070   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:43.074681   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:43.089828   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:43.091261   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:43.266030   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:43.575468   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:43.593068   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:43.603401   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:43.767277   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:44.075079   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:44.088238   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:44.091341   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:44.264978   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:44.580935   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:44.588334   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:44.591305   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:44.764361   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:45.075509   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:45.088809   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:45.092240   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:45.264091   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:45.575176   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:45.592948   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:45.594262   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:45.766122   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:46.075877   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:46.087940   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:46.090549   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:46.264704   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:46.574639   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:46.587441   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:46.591479   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:46.764216   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:47.074869   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:47.087850   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:47.091139   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:47.265472   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:47.578661   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:47.587776   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:47.591804   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:47.764605   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:48.074832   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:48.090765   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:48.091157   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:48.268466   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:48.577476   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:48.589909   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:48.595622   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:48.767947   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:49.075054   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:49.088536   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:49.092473   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:49.265189   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:49.574516   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:49.587530   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:49.591910   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:49.764375   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:50.076089   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:50.088780   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:50.091617   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:50.265922   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:50.575225   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:50.589129   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:50.592023   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:50.768026   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:51.078610   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:51.089003   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:51.091833   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:51.263906   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:51.585496   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:51.594971   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:51.596575   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:51.763529   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:52.077042   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:52.089088   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:52.093591   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:52.264474   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:52.580392   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:52.588188   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:52.591334   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:52.763896   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:53.074346   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:53.090153   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:53.094874   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:53.264179   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:53.575213   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:53.594119   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:53.600222   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:53.764981   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:54.075912   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:54.092238   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:54.096789   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:54.264699   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:54.578685   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:54.588781   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:54.595704   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:54.766082   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:55.074817   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:55.091926   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:55.094861   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:55.263852   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:55.574983   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:55.588815   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:55.591687   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:55.764725   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:56.074641   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:56.088718   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:56.092198   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:56.264844   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:56.574557   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:56.587250   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:56.590793   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:56.766363   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:57.075474   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:57.087271   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:57.090784   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:57.264789   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:57.575281   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:57.588439   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:57.591982   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:57.765636   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:58.074201   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:58.096920   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:58.101279   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:58.265883   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:58.576972   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:58.588051   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:58.591090   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:58.765677   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:59.082429   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:59.089855   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:59.094280   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:59.265244   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:07:59.575786   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:07:59.588138   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:07:59.592665   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:07:59.767711   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:00.075087   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:00.090681   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:00.092491   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:00.265210   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:00.575589   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:00.587968   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:00.592361   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:00.765479   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:01.075976   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:01.088138   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:01.094668   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:01.263858   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:01.577544   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:01.587782   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:01.591914   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:01.765451   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:02.075793   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:02.088130   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:02.091956   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:02.264156   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:02.574961   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:02.588941   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:02.591480   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:02.764980   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:03.075080   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:03.088646   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:03.092005   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:03.264119   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:03.574919   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:03.591087   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:03.597430   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:03.767552   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:04.075358   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:04.088140   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:04.091008   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:04.264426   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:04.582952   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:04.589896   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:04.598222   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:04.764325   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:05.075651   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:05.088163   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:05.091540   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:05.265803   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:05.573927   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:05.588044   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:05.596909   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:05.764778   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:06.074889   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:06.089725   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:06.091610   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:06.264070   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:06.575573   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:06.587629   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:06.592120   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:06.764574   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:07.075460   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:07.088695   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:07.092300   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:07.264238   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:07.575717   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:07.587511   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:07.591173   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:07.763754   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:08.074016   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:08.088552   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:08.095848   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:08.264753   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:08.576365   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:08.592736   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:08.594196   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:08.764503   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:09.076237   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:09.088099   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:09.096313   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:09.265004   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:09.578195   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:09.588673   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:09.591669   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:09.763718   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:10.075603   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:10.090069   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:10.092508   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:10.265273   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:10.575461   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:10.588974   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:10.593778   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:10.765504   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:11.075165   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:11.088088   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:11.091623   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:11.263762   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:11.574861   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:11.590324   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:11.591246   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:11.763720   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:12.074931   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:12.087615   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:12.092209   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:12.266639   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:12.575467   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:12.588554   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:12.594200   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:12.764851   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:13.075521   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:13.088157   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:13.093707   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:13.267834   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:13.575807   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:13.588617   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:13.592028   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:13.764631   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:14.074596   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:14.087735   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:14.090927   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:14.265258   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:14.576011   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:14.589763   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:14.592071   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:14.765280   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:15.075192   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:15.088763   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:15.091508   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:15.771945   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:15.772138   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:15.772862   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:15.775531   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:15.782472   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:16.075521   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:16.092143   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:16.093403   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:16.265259   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:16.574565   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:16.588009   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:16.590926   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:16.764696   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:17.076417   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:17.094113   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:17.094201   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:17.264699   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:17.574296   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:17.587506   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:17.591392   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 19:08:17.770271   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:18.074948   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:18.088540   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:18.091423   18263 kapi.go:107] duration metric: took 41.504433919s to wait for kubernetes.io/minikube-addons=registry ...
	I0319 19:08:18.265643   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:18.575036   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:18.589837   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:18.764927   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:19.075246   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:19.088774   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:19.266139   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:19.575187   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:19.588379   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:19.771658   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:20.073785   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:20.089360   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:20.265925   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:20.574464   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:20.589876   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:20.764073   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:21.075810   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:21.087478   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:21.266230   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:21.574349   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:21.589755   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:21.765657   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:22.074898   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:22.087690   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:22.264365   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:22.575039   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:22.590014   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:22.764072   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:23.076047   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:23.088327   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:23.264250   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:23.575447   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:23.588369   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:23.765419   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:24.075337   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:24.088811   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:24.266521   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:24.576239   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:24.588135   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:24.764226   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:25.075563   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:25.087549   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:25.266323   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:25.575016   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:25.588404   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:25.768455   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:26.075050   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:26.088320   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:26.265210   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:26.574863   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:26.588253   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:26.765781   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:27.074395   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:27.093193   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:27.263681   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:27.575130   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:27.587949   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:27.765153   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:28.076702   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:28.088417   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:28.264787   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:28.574706   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:28.588625   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:28.765275   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:29.075578   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:29.089217   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:29.264411   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:29.575178   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:29.588538   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:29.764575   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:30.075850   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:30.089085   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:30.264670   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:30.574750   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:30.588824   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:30.765150   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:31.076381   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:31.088969   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:31.264783   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:31.575271   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:31.590858   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:31.764190   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:32.075812   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:32.087775   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:32.266846   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:32.575175   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:32.588184   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:32.765167   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:33.074715   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:33.087981   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:33.265044   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:33.574629   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:33.587671   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:33.766119   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:34.075092   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:34.089922   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:34.264046   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:34.578261   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:34.590279   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:34.764863   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:35.075113   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:35.087868   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:35.265613   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:35.575405   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:35.589575   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:35.765757   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:36.076762   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:36.090634   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:36.267603   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:36.574930   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:36.588124   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:36.764832   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:37.074513   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:37.087632   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:37.265149   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:37.576686   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:37.589525   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:37.766104   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:38.076223   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:38.088793   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:38.264864   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:38.580150   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:38.590809   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:38.765270   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:39.081643   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:39.089983   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:39.265660   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:39.575374   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:39.589491   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:39.764855   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:40.074784   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:40.087813   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:40.264875   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:40.574314   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:40.589186   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:40.765142   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:41.077333   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:41.088446   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:41.264517   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:41.573796   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:41.587335   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:41.768885   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:42.074939   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:42.098289   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:42.269972   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:42.575610   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:42.593862   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:42.766989   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:43.074130   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:43.088781   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:43.265443   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:43.574978   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:43.587902   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:43.765327   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:44.075392   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:44.087779   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:44.266226   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:44.602859   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:44.603707   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:44.764228   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:45.075506   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:45.088656   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:45.266485   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:45.575043   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:45.589994   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:45.765089   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:46.075159   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:46.088580   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:46.273567   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:46.574654   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:46.588663   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:46.764633   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:47.075849   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:47.089781   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:47.268071   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:47.577628   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:47.588326   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:47.766546   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:48.074994   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:48.089530   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:48.267279   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:48.591243   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:48.595527   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:48.764885   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:49.075562   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:49.088698   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:49.265101   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:49.576547   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:49.590442   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:49.764613   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:50.075405   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:50.089099   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:50.266174   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:50.581478   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:50.588054   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:50.766090   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:51.075313   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:51.089360   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:51.266727   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:51.575419   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:51.588967   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:51.765082   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:52.075081   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:52.089453   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:52.270688   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:52.574832   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:52.589188   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:52.765327   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:53.080050   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:53.090243   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:53.265050   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:53.576078   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:53.589282   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:53.766303   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:54.077302   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:54.088154   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:54.264604   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:54.576058   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:54.588391   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:54.765020   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:55.075592   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:55.092191   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:55.267761   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:55.575268   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 19:08:55.588583   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:55.773690   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:56.074295   18263 kapi.go:107] duration metric: took 1m18.505677399s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0319 19:08:56.090649   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:56.268272   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:56.588621   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:56.767191   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:57.089826   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:57.264642   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:57.589527   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:57.764592   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:58.090091   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:58.264245   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:58.589141   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:58.765161   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:59.089559   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:59.265357   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:08:59.593483   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:08:59.765380   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:00.089763   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:00.265127   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:00.589362   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:00.765623   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:01.089459   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:01.266399   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:01.588986   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:01.769587   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:02.090161   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:02.264046   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:02.589740   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:02.764477   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:03.089302   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:03.264687   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:03.589718   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:03.764936   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:04.088688   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:04.266128   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:04.589070   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:04.764447   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:05.089409   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:05.264793   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:05.588618   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:05.764847   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:06.088420   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:06.264469   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:06.589494   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:06.763984   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:07.088669   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:07.265335   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:07.589633   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:07.764299   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:08.090361   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:08.264692   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:08.590291   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:08.765306   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:09.088765   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:09.265243   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:09.588556   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:09.764998   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:10.089141   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:10.264242   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:10.588461   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:10.765042   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:11.088835   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:11.264950   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:11.591347   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:11.765629   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:12.089493   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:12.264754   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:12.588516   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:12.764592   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:13.088918   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:13.264902   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:13.588761   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:13.764981   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:14.088796   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:14.265614   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:14.589131   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:14.764664   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:15.088679   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:15.265249   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:15.589254   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:15.765481   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:16.088254   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:16.264804   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:16.588869   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:16.764517   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:17.089558   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:17.265883   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:17.589246   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:17.764778   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:18.088904   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:18.265633   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:18.589158   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:18.764146   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:19.087925   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:19.264372   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:19.589066   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:19.765766   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:20.088146   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:20.264881   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:20.588004   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:20.765196   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:21.088972   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:21.265091   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:21.589840   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:21.765210   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:22.089526   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:22.269705   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:22.588563   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:22.764202   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:23.089261   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:23.264413   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:23.589764   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:23.764866   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:24.090066   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:24.265105   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:24.588733   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:24.765005   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:25.088637   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:25.264843   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:25.589218   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:25.764186   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:26.089205   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:26.264477   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:26.588659   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:26.764192   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:27.090548   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:27.264657   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:27.591336   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:27.764932   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:28.088376   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:28.265088   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:28.588523   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:28.765203   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:29.089836   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:29.263960   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:29.588429   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:29.765456   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:30.089701   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:30.265241   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:30.588854   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:30.765154   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:31.089367   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:31.264991   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:31.589569   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:31.765820   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:32.088450   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:32.264161   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:32.589445   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:32.764987   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:33.089573   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:33.264840   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:33.588107   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:33.764466   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:34.089308   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:34.265243   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:34.589047   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:34.764082   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:35.088818   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:35.263755   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:35.588689   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:35.764896   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:36.088812   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:36.265200   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:36.588162   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:36.766090   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:37.089016   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:37.263915   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:37.589493   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:37.764977   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:38.089204   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:38.266638   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:38.590322   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:38.764623   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:39.089239   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:39.264696   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:39.589202   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:39.764447   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:40.088584   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:40.264276   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:40.588911   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:40.763776   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:41.088318   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:41.264774   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:41.588608   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:41.764484   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:42.089688   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:42.265354   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:42.588461   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:42.766793   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:43.088654   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:43.264871   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:43.588587   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:43.764605   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:44.089218   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:44.264356   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:44.589272   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:44.764419   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:45.091163   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:45.264093   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:45.588811   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:45.764807   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:46.088905   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:46.264309   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:46.589474   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:46.764347   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:47.089127   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:47.264379   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:47.589078   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:47.764582   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:48.089445   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:48.264752   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:48.589813   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:48.767336   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:49.089377   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:49.265139   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:49.588760   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:49.764761   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:50.089029   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:50.265263   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:50.588232   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:50.763861   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:51.088422   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:51.264755   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:51.589272   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:51.764803   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:52.088704   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:52.264662   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:52.588820   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:52.763862   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:53.089231   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:53.265758   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:53.588466   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:53.764665   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:54.089667   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:54.265404   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:54.589072   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:54.765442   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:55.089442   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:55.264665   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:55.588869   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:55.768942   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:56.088646   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:56.268444   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:56.762814   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:56.770105   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:57.089676   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:57.264922   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:57.589108   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:57.764455   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:58.088928   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:58.263933   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:58.588589   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:58.765611   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:59.089584   18263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 19:09:59.266860   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:09:59.588690   18263 kapi.go:107] duration metric: took 2m23.008058245s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0319 19:09:59.764759   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:00.265272   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:00.764631   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:01.264882   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:01.764743   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:02.264802   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:02.763998   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:03.276500   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:03.775573   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:04.264402   18263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 19:10:04.763837   18263 kapi.go:107] duration metric: took 2m25.003525015s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0319 19:10:04.765630   18263 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-630101 cluster.
	I0319 19:10:04.766799   18263 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0319 19:10:04.767906   18263 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0319 19:10:04.769232   18263 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, metrics-server, helm-tiller, yakd, ingress-dns, inspektor-gadget, cloud-spanner, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0319 19:10:04.770494   18263 addons.go:505] duration metric: took 2m38.256727427s for enable addons: enabled=[nvidia-device-plugin storage-provisioner metrics-server helm-tiller yakd ingress-dns inspektor-gadget cloud-spanner default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0319 19:10:04.770534   18263 start.go:245] waiting for cluster config update ...
	I0319 19:10:04.770551   18263 start.go:254] writing updated cluster config ...
	I0319 19:10:04.770779   18263 ssh_runner.go:195] Run: rm -f paused
	I0319 19:10:04.822820   18263 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 19:10:04.824587   18263 out.go:177] * Done! kubectl is now configured to use "addons-630101" cluster and "default" namespace by default
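For reference, the `gcp-auth-skip-secret` label mentioned in the gcp-auth output above is applied to the pod itself. A minimal sketch of a pod manifest using it is shown below; the pod name and container are placeholders, and the `"true"` value is an assumption, since the message only states that the label key must be present in the pod configuration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-example        # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # key named in the log above; value assumed, key presence is what matters
    spec:
      containers:
      - name: hello-app                # placeholder container
        image: gcr.io/google-samples/hello-app:1.0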
	
	
	==> CRI-O <==
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.495460316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710875589495430446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569952,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b4d3fce-c372-4e70-a2bf-8dd0ad982a41 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.496394454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e355a967-fadb-4d10-9157-9c582cd0a955 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.496477599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e355a967-fadb-4d10-9157-9c582cd0a955 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.496801618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14c8a2a2404d28521e5042f3ae775194ca3088e457af32518b1d75c66b9758ae,PodSandboxId:b2d4488037dfdde52c190569d611669fe7f2214ef1fce51d97fc1decfbe55031,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710875583182695502,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-hbnhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a0266f3-e5c5-4490-a45e-5844528a33ff,},Annotations:map[string]string{io.kubernetes.container.hash: c1525ccc,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1b754ab8f23082838de873cf69cf76d450fb6e1194990c90631dec44d3daf4,PodSandboxId:d6c853ee6559f37d61f777671af52a77390b9be7ac370e49dbfe83c036604ac7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710875442822472569,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64517821-4670-447e-8ddc-b3df143a2aae,},Annotations:map[string]string{io.kubern
etes.container.hash: 2d12a691,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd46188bb7a0c4d64bfc0f18af06422eb578387e969857a9f3e416a9b25c4a97,PodSandboxId:8644b7858c60fa4cd95ae01101801efe6995df6a939142426d6ae3a3664211ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710875435689725448,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-h94gd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 8e09ab7c-fd92-4587-85c9-9cf10b97e200,},Annotations:map[string]string{io.kubernetes.container.hash: 213908ac,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993115486f06573d9a7012f841fdf882e69b1c7d728c3334680cfcb29cb0058b,PodSandboxId:2a1c443001a25d3d3c6d3d1b0d5a288128edb571a45f9e3df8fd171922caf089,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710875403758680846,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-vcsmv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a2a3c0bb-5253-42de-93b7-6cfd0b372cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 4076cfd9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b8f0fb5be5fa6c8d1aeb749a6c5502f95d8b7b555d45e487a870e2273f3fe7,PodSandboxId:18660cd3bf470037e2a8e45670482c6fc5f1086585496d7ab672757a311b6f43,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710875323913965332,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8lqdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1d0d039-899f-47a4-9521-814981244259,},Annotations:map[string]string{io.kubernetes.container.hash: 1b6a4133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbe1422ce33da339362bc28aa8f6218f248a8c72a6b30aacc1b726a72c95c8,PodSandboxId:493618b686e4c9bb6f8701596b035db8948033202a49850e34b8f3c0c674688f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710875320582936038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jtd7d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 821e0de3-f0d3-47cc-b8eb-f7b07314147c,},Annotations:map[string]string{io.kubernetes.container.hash: 19653035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19532dd978c3f719dfad70672bf56ac204695ff868de7e7eab6afdbab2114f1,PodSandboxId:ec3fb49406491ee85733f88ad9c018a526adfe2c1efaaae753a7b8e4d7c3854b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710875315779441223,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-4d26z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 61039f37-b485-4b10-966e-a30673e86b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2cda831c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a24109bbfe6a5f5bb7a26311261c752e37341e7931cbab103c20414d269689,PodSandboxId:7a2d19edfb4473b7bd4f63202f4019de0f6158e3e422ca25f30e7617d47f5ea5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710875305566817352,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-l2zx4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d174bf0b-4a12-4a7f-ba0f-29e10cfcd8f4,},Annotations:map[string]string{io.kubernetes.container.hash: dd754055,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c13f8a6b54d172ed461fbd8e5bd8a1d029f4250670bbbcaa53041af4d5bfe5f,PodSandboxId:5270ad563be7720c7cf34dd75110f0c5922c0ef7926df981173c88b911faf4ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710875254548654162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f195742-cbef-4d02-8551-5450578fe305,},Annotations:map[string]string{io.kubernetes.container.hash: cba911bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f12e0cc0be64fdf67d13b4500a9cb322b1e8e641e5c56828ae1409ef37f0382,PodSandboxId:38a3f031ce80275ec19905444d034f6421c56ce3b8a4ada7b4f1e589eb178c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875248898171681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ftlmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 962c919c-5144-4459-bac9-eb608143b937,},Annotations:map[string]string{io.kubernetes.container.hash: b2ad3e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a463f4128c052bbcc05244f1f511d09edc86a46215c7a8b48fba4ce86faa488e,PodSand
boxId:472d29d6ac274379a8482a6b092709aa0f67bf39f78a12bc56eca2928a952048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875248231329329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7zcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2374b11-f4d7-4083-95bc-6ce5f9d0505b,},Annotations:map[string]string{io.kubernetes.container.hash: e5d593a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f99dc7886e984650a1fc2217da499bef6a1508872e550c4365fd4b02f859ec,PodSandboxId:6488a9916190bb4254fc3ac9b46
57782caf5241d1ef80105605a2e4f972ad62b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875227746452074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c079dccfae9f28d331b4b74b44858e53,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e5de3e7f1600b18dabdbdcf6a13487fb12dc462fe79ec9bfc49a71370b2471,PodSandboxId:3cb41c2981951c50b8f88c1d7aa2e98688fd7f8a7e0f
aa6f799f63df98b0a72a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875227754072880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684a1fff38322b93dd29c52579ce532d,},Annotations:map[string]string{io.kubernetes.container.hash: cd38f780,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1010ef6c02f745ca9d7762586cc30ca91362d209752e2bd9ad4b71ba7956c852,PodSandboxId:ca62148896b4fffa5f5a843c1412e17ef6db804ebb0396dbbc9404639fa9a0aa,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875227746308615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1aea6e6279570fdf9c12cb48b792789,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebb05c8377dfc2146f0e1edc3240a04e51a12ee2573c085a23d8edfe438838,PodSandboxId:0f1cc1375b28527a32424db7b025d587a22ffea824b05469ff2b5682d6a4ba92,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875227671116154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bcf351ae166c4dd3e1be9efb505d41,},Annotations:map[string]string{io.kubernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e355a967-fadb-4d10-9157-9c582cd0a955 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.547205380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a062bbba-3d22-4d79-b350-819831683fc2 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.547306983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a062bbba-3d22-4d79-b350-819831683fc2 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.548904452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c003b985-42bb-4c02-b558-009e27377013 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.550125381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710875589550098975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569952,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c003b985-42bb-4c02-b558-009e27377013 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.550598385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62f373d5-6fb9-45b3-ba23-30f5a66546f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.550716356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62f373d5-6fb9-45b3-ba23-30f5a66546f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.551158073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14c8a2a2404d28521e5042f3ae775194ca3088e457af32518b1d75c66b9758ae,PodSandboxId:b2d4488037dfdde52c190569d611669fe7f2214ef1fce51d97fc1decfbe55031,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710875583182695502,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-hbnhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a0266f3-e5c5-4490-a45e-5844528a33ff,},Annotations:map[string]string{io.kubernetes.container.hash: c1525ccc,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1b754ab8f23082838de873cf69cf76d450fb6e1194990c90631dec44d3daf4,PodSandboxId:d6c853ee6559f37d61f777671af52a77390b9be7ac370e49dbfe83c036604ac7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710875442822472569,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64517821-4670-447e-8ddc-b3df143a2aae,},Annotations:map[string]string{io.kubern
etes.container.hash: 2d12a691,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd46188bb7a0c4d64bfc0f18af06422eb578387e969857a9f3e416a9b25c4a97,PodSandboxId:8644b7858c60fa4cd95ae01101801efe6995df6a939142426d6ae3a3664211ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710875435689725448,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-h94gd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 8e09ab7c-fd92-4587-85c9-9cf10b97e200,},Annotations:map[string]string{io.kubernetes.container.hash: 213908ac,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993115486f06573d9a7012f841fdf882e69b1c7d728c3334680cfcb29cb0058b,PodSandboxId:2a1c443001a25d3d3c6d3d1b0d5a288128edb571a45f9e3df8fd171922caf089,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710875403758680846,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-vcsmv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a2a3c0bb-5253-42de-93b7-6cfd0b372cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 4076cfd9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b8f0fb5be5fa6c8d1aeb749a6c5502f95d8b7b555d45e487a870e2273f3fe7,PodSandboxId:18660cd3bf470037e2a8e45670482c6fc5f1086585496d7ab672757a311b6f43,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710875323913965332,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8lqdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1d0d039-899f-47a4-9521-814981244259,},Annotations:map[string]string{io.kubernetes.container.hash: 1b6a4133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbe1422ce33da339362bc28aa8f6218f248a8c72a6b30aacc1b726a72c95c8,PodSandboxId:493618b686e4c9bb6f8701596b035db8948033202a49850e34b8f3c0c674688f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710875320582936038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jtd7d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 821e0de3-f0d3-47cc-b8eb-f7b07314147c,},Annotations:map[string]string{io.kubernetes.container.hash: 19653035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19532dd978c3f719dfad70672bf56ac204695ff868de7e7eab6afdbab2114f1,PodSandboxId:ec3fb49406491ee85733f88ad9c018a526adfe2c1efaaae753a7b8e4d7c3854b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710875315779441223,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-4d26z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 61039f37-b485-4b10-966e-a30673e86b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2cda831c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a24109bbfe6a5f5bb7a26311261c752e37341e7931cbab103c20414d269689,PodSandboxId:7a2d19edfb4473b7bd4f63202f4019de0f6158e3e422ca25f30e7617d47f5ea5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710875305566817352,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-l2zx4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d174bf0b-4a12-4a7f-ba0f-29e10cfcd8f4,},Annotations:map[string]string{io.kubernetes.container.hash: dd754055,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c13f8a6b54d172ed461fbd8e5bd8a1d029f4250670bbbcaa53041af4d5bfe5f,PodSandboxId:5270ad563be7720c7cf34dd75110f0c5922c0ef7926df981173c88b911faf4ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710875254548654162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f195742-cbef-4d02-8551-5450578fe305,},Annotations:map[string]string{io.kubernetes.container.hash: cba911bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f12e0cc0be64fdf67d13b4500a9cb322b1e8e641e5c56828ae1409ef37f0382,PodSandboxId:38a3f031ce80275ec19905444d034f6421c56ce3b8a4ada7b4f1e589eb178c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875248898171681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ftlmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 962c919c-5144-4459-bac9-eb608143b937,},Annotations:map[string]string{io.kubernetes.container.hash: b2ad3e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a463f4128c052bbcc05244f1f511d09edc86a46215c7a8b48fba4ce86faa488e,PodSand
boxId:472d29d6ac274379a8482a6b092709aa0f67bf39f78a12bc56eca2928a952048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875248231329329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7zcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2374b11-f4d7-4083-95bc-6ce5f9d0505b,},Annotations:map[string]string{io.kubernetes.container.hash: e5d593a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f99dc7886e984650a1fc2217da499bef6a1508872e550c4365fd4b02f859ec,PodSandboxId:6488a9916190bb4254fc3ac9b46
57782caf5241d1ef80105605a2e4f972ad62b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875227746452074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c079dccfae9f28d331b4b74b44858e53,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e5de3e7f1600b18dabdbdcf6a13487fb12dc462fe79ec9bfc49a71370b2471,PodSandboxId:3cb41c2981951c50b8f88c1d7aa2e98688fd7f8a7e0f
aa6f799f63df98b0a72a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875227754072880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684a1fff38322b93dd29c52579ce532d,},Annotations:map[string]string{io.kubernetes.container.hash: cd38f780,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1010ef6c02f745ca9d7762586cc30ca91362d209752e2bd9ad4b71ba7956c852,PodSandboxId:ca62148896b4fffa5f5a843c1412e17ef6db804ebb0396dbbc9404639fa9a0aa,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875227746308615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1aea6e6279570fdf9c12cb48b792789,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebb05c8377dfc2146f0e1edc3240a04e51a12ee2573c085a23d8edfe438838,PodSandboxId:0f1cc1375b28527a32424db7b025d587a22ffea824b05469ff2b5682d6a4ba92,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875227671116154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bcf351ae166c4dd3e1be9efb505d41,},Annotations:map[string]string{io.kubernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62f373d5-6fb9-45b3-ba23-30f5a66546f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.586149533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c63e3b5d-a17d-42aa-a910-d970f1675f57 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.586223761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c63e3b5d-a17d-42aa-a910-d970f1675f57 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.587270864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6e39f75-5f7d-4b49-8ff4-f4723a67d698 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.588577163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710875589588552310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569952,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6e39f75-5f7d-4b49-8ff4-f4723a67d698 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.589207334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66d3a52b-6d7e-46bf-988f-781d269f58ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.589292363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66d3a52b-6d7e-46bf-988f-781d269f58ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.590101387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14c8a2a2404d28521e5042f3ae775194ca3088e457af32518b1d75c66b9758ae,PodSandboxId:b2d4488037dfdde52c190569d611669fe7f2214ef1fce51d97fc1decfbe55031,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710875583182695502,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-hbnhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a0266f3-e5c5-4490-a45e-5844528a33ff,},Annotations:map[string]string{io.kubernetes.container.hash: c1525ccc,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1b754ab8f23082838de873cf69cf76d450fb6e1194990c90631dec44d3daf4,PodSandboxId:d6c853ee6559f37d61f777671af52a77390b9be7ac370e49dbfe83c036604ac7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710875442822472569,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64517821-4670-447e-8ddc-b3df143a2aae,},Annotations:map[string]string{io.kubern
etes.container.hash: 2d12a691,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd46188bb7a0c4d64bfc0f18af06422eb578387e969857a9f3e416a9b25c4a97,PodSandboxId:8644b7858c60fa4cd95ae01101801efe6995df6a939142426d6ae3a3664211ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710875435689725448,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-h94gd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 8e09ab7c-fd92-4587-85c9-9cf10b97e200,},Annotations:map[string]string{io.kubernetes.container.hash: 213908ac,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993115486f06573d9a7012f841fdf882e69b1c7d728c3334680cfcb29cb0058b,PodSandboxId:2a1c443001a25d3d3c6d3d1b0d5a288128edb571a45f9e3df8fd171922caf089,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710875403758680846,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-vcsmv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a2a3c0bb-5253-42de-93b7-6cfd0b372cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 4076cfd9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b8f0fb5be5fa6c8d1aeb749a6c5502f95d8b7b555d45e487a870e2273f3fe7,PodSandboxId:18660cd3bf470037e2a8e45670482c6fc5f1086585496d7ab672757a311b6f43,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710875323913965332,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8lqdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1d0d039-899f-47a4-9521-814981244259,},Annotations:map[string]string{io.kubernetes.container.hash: 1b6a4133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbe1422ce33da339362bc28aa8f6218f248a8c72a6b30aacc1b726a72c95c8,PodSandboxId:493618b686e4c9bb6f8701596b035db8948033202a49850e34b8f3c0c674688f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710875320582936038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jtd7d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 821e0de3-f0d3-47cc-b8eb-f7b07314147c,},Annotations:map[string]string{io.kubernetes.container.hash: 19653035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19532dd978c3f719dfad70672bf56ac204695ff868de7e7eab6afdbab2114f1,PodSandboxId:ec3fb49406491ee85733f88ad9c018a526adfe2c1efaaae753a7b8e4d7c3854b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710875315779441223,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-4d26z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 61039f37-b485-4b10-966e-a30673e86b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2cda831c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a24109bbfe6a5f5bb7a26311261c752e37341e7931cbab103c20414d269689,PodSandboxId:7a2d19edfb4473b7bd4f63202f4019de0f6158e3e422ca25f30e7617d47f5ea5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710875305566817352,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-l2zx4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d174bf0b-4a12-4a7f-ba0f-29e10cfcd8f4,},Annotations:map[string]string{io.kubernetes.container.hash: dd754055,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c13f8a6b54d172ed461fbd8e5bd8a1d029f4250670bbbcaa53041af4d5bfe5f,PodSandboxId:5270ad563be7720c7cf34dd75110f0c5922c0ef7926df981173c88b911faf4ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710875254548654162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f195742-cbef-4d02-8551-5450578fe305,},Annotations:map[string]string{io.kubernetes.container.hash: cba911bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f12e0cc0be64fdf67d13b4500a9cb322b1e8e641e5c56828ae1409ef37f0382,PodSandboxId:38a3f031ce80275ec19905444d034f6421c56ce3b8a4ada7b4f1e589eb178c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875248898171681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ftlmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 962c919c-5144-4459-bac9-eb608143b937,},Annotations:map[string]string{io.kubernetes.container.hash: b2ad3e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a463f4128c052bbcc05244f1f511d09edc86a46215c7a8b48fba4ce86faa488e,PodSand
boxId:472d29d6ac274379a8482a6b092709aa0f67bf39f78a12bc56eca2928a952048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875248231329329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7zcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2374b11-f4d7-4083-95bc-6ce5f9d0505b,},Annotations:map[string]string{io.kubernetes.container.hash: e5d593a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f99dc7886e984650a1fc2217da499bef6a1508872e550c4365fd4b02f859ec,PodSandboxId:6488a9916190bb4254fc3ac9b46
57782caf5241d1ef80105605a2e4f972ad62b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875227746452074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c079dccfae9f28d331b4b74b44858e53,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e5de3e7f1600b18dabdbdcf6a13487fb12dc462fe79ec9bfc49a71370b2471,PodSandboxId:3cb41c2981951c50b8f88c1d7aa2e98688fd7f8a7e0f
aa6f799f63df98b0a72a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875227754072880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684a1fff38322b93dd29c52579ce532d,},Annotations:map[string]string{io.kubernetes.container.hash: cd38f780,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1010ef6c02f745ca9d7762586cc30ca91362d209752e2bd9ad4b71ba7956c852,PodSandboxId:ca62148896b4fffa5f5a843c1412e17ef6db804ebb0396dbbc9404639fa9a0aa,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875227746308615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1aea6e6279570fdf9c12cb48b792789,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebb05c8377dfc2146f0e1edc3240a04e51a12ee2573c085a23d8edfe438838,PodSandboxId:0f1cc1375b28527a32424db7b025d587a22ffea824b05469ff2b5682d6a4ba92,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875227671116154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bcf351ae166c4dd3e1be9efb505d41,},Annotations:map[string]string{io.kubernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66d3a52b-6d7e-46bf-988f-781d269f58ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.643963527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=799da809-64eb-4a51-b216-afbc95ba5516 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.644043105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=799da809-64eb-4a51-b216-afbc95ba5516 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.645461648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13e549e9-94c8-4efe-97a3-b4d86e9c9e41 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.646716489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710875589646688829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:569952,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13e549e9-94c8-4efe-97a3-b4d86e9c9e41 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.647587643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4287569-4878-4f3c-bf1e-e3b344350f43 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.647702538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4287569-4878-4f3c-bf1e-e3b344350f43 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:13:09 addons-630101 crio[674]: time="2024-03-19 19:13:09.648237234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14c8a2a2404d28521e5042f3ae775194ca3088e457af32518b1d75c66b9758ae,PodSandboxId:b2d4488037dfdde52c190569d611669fe7f2214ef1fce51d97fc1decfbe55031,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710875583182695502,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-hbnhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a0266f3-e5c5-4490-a45e-5844528a33ff,},Annotations:map[string]string{io.kubernetes.container.hash: c1525ccc,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1b754ab8f23082838de873cf69cf76d450fb6e1194990c90631dec44d3daf4,PodSandboxId:d6c853ee6559f37d61f777671af52a77390b9be7ac370e49dbfe83c036604ac7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710875442822472569,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64517821-4670-447e-8ddc-b3df143a2aae,},Annotations:map[string]string{io.kubern
etes.container.hash: 2d12a691,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd46188bb7a0c4d64bfc0f18af06422eb578387e969857a9f3e416a9b25c4a97,PodSandboxId:8644b7858c60fa4cd95ae01101801efe6995df6a939142426d6ae3a3664211ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710875435689725448,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-h94gd,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 8e09ab7c-fd92-4587-85c9-9cf10b97e200,},Annotations:map[string]string{io.kubernetes.container.hash: 213908ac,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993115486f06573d9a7012f841fdf882e69b1c7d728c3334680cfcb29cb0058b,PodSandboxId:2a1c443001a25d3d3c6d3d1b0d5a288128edb571a45f9e3df8fd171922caf089,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710875403758680846,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-vcsmv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a2a3c0bb-5253-42de-93b7-6cfd0b372cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 4076cfd9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b8f0fb5be5fa6c8d1aeb749a6c5502f95d8b7b555d45e487a870e2273f3fe7,PodSandboxId:18660cd3bf470037e2a8e45670482c6fc5f1086585496d7ab672757a311b6f43,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710875323913965332,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8lqdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b1d0d039-899f-47a4-9521-814981244259,},Annotations:map[string]string{io.kubernetes.container.hash: 1b6a4133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37bbe1422ce33da339362bc28aa8f6218f248a8c72a6b30aacc1b726a72c95c8,PodSandboxId:493618b686e4c9bb6f8701596b035db8948033202a49850e34b8f3c0c674688f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710875320582936038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jtd7d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 821e0de3-f0d3-47cc-b8eb-f7b07314147c,},Annotations:map[string]string{io.kubernetes.container.hash: 19653035,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19532dd978c3f719dfad70672bf56ac204695ff868de7e7eab6afdbab2114f1,PodSandboxId:ec3fb49406491ee85733f88ad9c018a526adfe2c1efaaae753a7b8e4d7c3854b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710875315779441223,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-4d26z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 61039f37-b485-4b10-966e-a30673e86b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2cda831c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a24109bbfe6a5f5bb7a26311261c752e37341e7931cbab103c20414d269689,PodSandboxId:7a2d19edfb4473b7bd4f63202f4019de0f6158e3e422ca25f30e7617d47f5ea5,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710875305566817352,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-l2zx4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d174bf0b-4a12-4a7f-ba0f-29e10cfcd8f4,},Annotations:map[string]string{io.kubernetes.container.hash: dd754055,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c13f8a6b54d172ed461fbd8e5bd8a1d029f4250670bbbcaa53041af4d5bfe5f,PodSandboxId:5270ad563be7720c7cf34dd75110f0c5922c0ef7926df981173c88b911faf4ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710875254548654162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f195742-cbef-4d02-8551-5450578fe305,},Annotations:map[string]string{io.kubernetes.container.hash: cba911bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f12e0cc0be64fdf67d13b4500a9cb322b1e8e641e5c56828ae1409ef37f0382,PodSandboxId:38a3f031ce80275ec19905444d034f6421c56ce3b8a4ada7b4f1e589eb178c23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875248898171681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ftlmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 962c919c-5144-4459-bac9-eb608143b937,},Annotations:map[string]string{io.kubernetes.container.hash: b2ad3e1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a463f4128c052bbcc05244f1f511d09edc86a46215c7a8b48fba4ce86faa488e,PodSand
boxId:472d29d6ac274379a8482a6b092709aa0f67bf39f78a12bc56eca2928a952048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875248231329329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v7zcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2374b11-f4d7-4083-95bc-6ce5f9d0505b,},Annotations:map[string]string{io.kubernetes.container.hash: e5d593a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f99dc7886e984650a1fc2217da499bef6a1508872e550c4365fd4b02f859ec,PodSandboxId:6488a9916190bb4254fc3ac9b46
57782caf5241d1ef80105605a2e4f972ad62b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875227746452074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c079dccfae9f28d331b4b74b44858e53,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e5de3e7f1600b18dabdbdcf6a13487fb12dc462fe79ec9bfc49a71370b2471,PodSandboxId:3cb41c2981951c50b8f88c1d7aa2e98688fd7f8a7e0f
aa6f799f63df98b0a72a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875227754072880,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684a1fff38322b93dd29c52579ce532d,},Annotations:map[string]string{io.kubernetes.container.hash: cd38f780,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1010ef6c02f745ca9d7762586cc30ca91362d209752e2bd9ad4b71ba7956c852,PodSandboxId:ca62148896b4fffa5f5a843c1412e17ef6db804ebb0396dbbc9404639fa9a0aa,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875227746308615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1aea6e6279570fdf9c12cb48b792789,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebb05c8377dfc2146f0e1edc3240a04e51a12ee2573c085a23d8edfe438838,PodSandboxId:0f1cc1375b28527a32424db7b025d587a22ffea824b05469ff2b5682d6a4ba92,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875227671116154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-630101,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bcf351ae166c4dd3e1be9efb505d41,},Annotations:map[string]string{io.kubernetes.container.hash: ded600d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4287569-4878-4f3c-bf1e-e3b344350f43 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14c8a2a2404d2       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   b2d4488037dfd       hello-world-app-5d77478584-hbnhf
	1e1b754ab8f23       docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba                              2 minutes ago       Running             nginx                     0                   d6c853ee6559f       nginx
	bd46188bb7a0c       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   8644b7858c60f       headlamp-5485c556b-h94gd
	993115486f065       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   2a1c443001a25       gcp-auth-7d69788767-vcsmv
	90b8f0fb5be5f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              patch                     0                   18660cd3bf470       ingress-nginx-admission-patch-8lqdd
	37bbe1422ce33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              create                    0                   493618b686e4c       ingress-nginx-admission-create-jtd7d
	d19532dd978c3       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   ec3fb49406491       local-path-provisioner-78b46b4d5c-4d26z
	d0a24109bbfe6       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   7a2d19edfb447       yakd-dashboard-9947fc6bf-l2zx4
	3c13f8a6b54d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   5270ad563be77       storage-provisioner
	9f12e0cc0be64       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   38a3f031ce802       coredns-76f75df574-ftlmb
	a463f4128c052       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             5 minutes ago       Running             kube-proxy                0                   472d29d6ac274       kube-proxy-v7zcm
	c5e5de3e7f160       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   3cb41c2981951       etcd-addons-630101
	d8f99dc7886e9       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             6 minutes ago       Running             kube-scheduler            0                   6488a9916190b       kube-scheduler-addons-630101
	1010ef6c02f74       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             6 minutes ago       Running             kube-controller-manager   0                   ca62148896b4f       kube-controller-manager-addons-630101
	a4ebb05c8377d       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             6 minutes ago       Running             kube-apiserver            0                   0f1cc1375b285       kube-apiserver-addons-630101
	
	
	==> coredns [9f12e0cc0be64fdf67d13b4500a9cb322b1e8e641e5c56828ae1409ef37f0382] <==
	[INFO] 10.244.0.7:43501 - 3574 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102538s
	[INFO] 10.244.0.7:53969 - 32291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142411s
	[INFO] 10.244.0.7:53969 - 13345 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091642s
	[INFO] 10.244.0.7:48574 - 55324 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096944s
	[INFO] 10.244.0.7:48574 - 60446 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083252s
	[INFO] 10.244.0.7:47721 - 54273 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167202s
	[INFO] 10.244.0.7:47721 - 22023 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098331s
	[INFO] 10.244.0.7:56104 - 1530 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009631s
	[INFO] 10.244.0.7:56104 - 12537 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000227058s
	[INFO] 10.244.0.7:60800 - 47574 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082848s
	[INFO] 10.244.0.7:60800 - 18900 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061009s
	[INFO] 10.244.0.7:50176 - 25672 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049668s
	[INFO] 10.244.0.7:50176 - 55627 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088124s
	[INFO] 10.244.0.7:57975 - 7295 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038809s
	[INFO] 10.244.0.7:57975 - 53629 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000636s
	[INFO] 10.244.0.22:40719 - 43344 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000612183s
	[INFO] 10.244.0.22:56805 - 5701 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001121972s
	[INFO] 10.244.0.22:42523 - 27368 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000206481s
	[INFO] 10.244.0.22:40740 - 32874 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128353s
	[INFO] 10.244.0.22:57196 - 56227 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011405s
	[INFO] 10.244.0.22:40531 - 10791 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078515s
	[INFO] 10.244.0.22:45698 - 46940 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000946953s
	[INFO] 10.244.0.22:55704 - 52835 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.001601242s
	[INFO] 10.244.0.26:41366 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000432738s
	[INFO] 10.244.0.26:42592 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192859s
	
	
	==> describe nodes <==
	Name:               addons-630101
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-630101
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=addons-630101
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_07_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-630101
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:07:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-630101
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:13:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:11:18 +0000   Tue, 19 Mar 2024 19:07:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:11:18 +0000   Tue, 19 Mar 2024 19:07:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:11:18 +0000   Tue, 19 Mar 2024 19:07:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:11:18 +0000   Tue, 19 Mar 2024 19:07:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    addons-630101
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac57d9399fcf4d0d8a7e1aa125e90c7b
	  System UUID:                ac57d939-9fcf-4d0d-8a7e-1aa125e90c7b
	  Boot ID:                    3d58ca28-3b2d-4843-b99e-23c2e927ef39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-hbnhf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-7d69788767-vcsmv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  headlamp                    headlamp-5485c556b-h94gd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 coredns-76f75df574-ftlmb                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-630101                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m55s
	  kube-system                 kube-apiserver-addons-630101               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-controller-manager-addons-630101      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-v7zcm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-addons-630101               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  local-path-storage          local-path-provisioner-78b46b4d5c-4d26z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-l2zx4             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m40s                kube-proxy       
	  Normal  Starting                 6m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet          Node addons-630101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet          Node addons-630101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet          Node addons-630101 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s                kubelet          Node addons-630101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s                kubelet          Node addons-630101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s                kubelet          Node addons-630101 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m55s                kubelet          Node addons-630101 status is now: NodeReady
	  Normal  RegisteredNode           5m44s                node-controller  Node addons-630101 event: Registered Node addons-630101 in Controller
	
	
	==> dmesg <==
	[  +5.199229] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.084922] kauditd_printk_skb: 107 callbacks suppressed
	[  +8.274458] kauditd_printk_skb: 86 callbacks suppressed
	[Mar19 19:08] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.750003] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.574352] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.646252] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.707409] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.800743] kauditd_printk_skb: 12 callbacks suppressed
	[Mar19 19:09] kauditd_printk_skb: 24 callbacks suppressed
	[ +14.201216] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.199319] kauditd_printk_skb: 1 callbacks suppressed
	[Mar19 19:10] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.716892] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.164560] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.233793] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.115482] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.336678] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.193121] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.306263] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.269410] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.003225] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.261972] kauditd_printk_skb: 14 callbacks suppressed
	[Mar19 19:12] kauditd_printk_skb: 2 callbacks suppressed
	[Mar19 19:13] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [c5e5de3e7f1600b18dabdbdcf6a13487fb12dc462fe79ec9bfc49a71370b2471] <==
	{"level":"info","ts":"2024-03-19T19:08:15.750299Z","caller":"traceutil/trace.go:171","msg":"trace[1896960042] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:960; }","duration":"189.920535ms","start":"2024-03-19T19:08:15.560374Z","end":"2024-03-19T19:08:15.750294Z","steps":["trace[1896960042] 'agreement among raft nodes before linearized reading'  (duration: 189.788009ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:08:15.750428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.845001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T19:08:15.750448Z","caller":"traceutil/trace.go:171","msg":"trace[1437721700] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:960; }","duration":"225.882738ms","start":"2024-03-19T19:08:15.52456Z","end":"2024-03-19T19:08:15.750443Z","steps":["trace[1437721700] 'agreement among raft nodes before linearized reading'  (duration: 225.848739ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:09:56.737422Z","caller":"traceutil/trace.go:171","msg":"trace[1092515980] linearizableReadLoop","detail":"{readStateIndex:1318; appliedIndex:1317; }","duration":"213.146546ms","start":"2024-03-19T19:09:56.524099Z","end":"2024-03-19T19:09:56.737246Z","steps":["trace[1092515980] 'read index received'  (duration: 212.862869ms)","trace[1092515980] 'applied index is now lower than readState.Index'  (duration: 283.124µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T19:09:56.738745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.355309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-03-19T19:09:56.740055Z","caller":"traceutil/trace.go:171","msg":"trace[374476219] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1262; }","duration":"168.690538ms","start":"2024-03-19T19:09:56.571289Z","end":"2024-03-19T19:09:56.739979Z","steps":["trace[374476219] 'agreement among raft nodes before linearized reading'  (duration: 167.04538ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:09:56.740056Z","caller":"traceutil/trace.go:171","msg":"trace[343461066] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"315.542937ms","start":"2024-03-19T19:09:56.423405Z","end":"2024-03-19T19:09:56.738948Z","steps":["trace[343461066] 'process raft request'  (duration: 313.615734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:09:56.740321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.215773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T19:09:56.740484Z","caller":"traceutil/trace.go:171","msg":"trace[1677094271] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1262; }","duration":"216.400188ms","start":"2024-03-19T19:09:56.524074Z","end":"2024-03-19T19:09:56.740474Z","steps":["trace[1677094271] 'agreement among raft nodes before linearized reading'  (duration: 216.211622ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:09:56.74107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:09:56.423391Z","time spent":"316.876241ms","remote":"127.0.0.1:45846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1259 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-19T19:10:03.239138Z","caller":"traceutil/trace.go:171","msg":"trace[1473433008] linearizableReadLoop","detail":"{readStateIndex:1342; appliedIndex:1341; }","duration":"345.484402ms","start":"2024-03-19T19:10:02.893637Z","end":"2024-03-19T19:10:03.239122Z","steps":["trace[1473433008] 'read index received'  (duration: 345.324451ms)","trace[1473433008] 'applied index is now lower than readState.Index'  (duration: 159.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T19:10:03.239488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.823552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-03-19T19:10:03.239579Z","caller":"traceutil/trace.go:171","msg":"trace[1863151335] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1284; }","duration":"345.952452ms","start":"2024-03-19T19:10:02.893614Z","end":"2024-03-19T19:10:03.239566Z","steps":["trace[1863151335] 'agreement among raft nodes before linearized reading'  (duration: 345.703969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:10:03.239625Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:10:02.893602Z","time spent":"346.015586ms","remote":"127.0.0.1:45934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-03-19T19:10:03.239634Z","caller":"traceutil/trace.go:171","msg":"trace[1563205916] transaction","detail":"{read_only:false; response_revision:1284; number_of_response:1; }","duration":"460.964269ms","start":"2024-03-19T19:10:02.778656Z","end":"2024-03-19T19:10:03.23962Z","steps":["trace[1563205916] 'process raft request'  (duration: 460.343235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:10:03.239774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:10:02.778644Z","time spent":"461.065555ms","remote":"127.0.0.1:45846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1281 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-19T19:10:18.402483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.077311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12884"}
	{"level":"info","ts":"2024-03-19T19:10:18.402539Z","caller":"traceutil/trace.go:171","msg":"trace[2141086132] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1410; }","duration":"181.180444ms","start":"2024-03-19T19:10:18.221345Z","end":"2024-03-19T19:10:18.402526Z","steps":["trace[2141086132] 'range keys from in-memory index tree'  (duration: 180.876728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:10:18.402577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.525097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12884"}
	{"level":"info","ts":"2024-03-19T19:10:18.402611Z","caller":"traceutil/trace.go:171","msg":"trace[115957461] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1410; }","duration":"220.588693ms","start":"2024-03-19T19:10:18.182014Z","end":"2024-03-19T19:10:18.402603Z","steps":["trace[115957461] 'range keys from in-memory index tree'  (duration: 220.409596ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:10:33.620335Z","caller":"traceutil/trace.go:171","msg":"trace[215033215] linearizableReadLoop","detail":"{readStateIndex:1619; appliedIndex:1618; }","duration":"116.794788ms","start":"2024-03-19T19:10:33.503525Z","end":"2024-03-19T19:10:33.62032Z","steps":["trace[215033215] 'read index received'  (duration: 116.635637ms)","trace[215033215] 'applied index is now lower than readState.Index'  (duration: 158.556µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T19:10:33.620445Z","caller":"traceutil/trace.go:171","msg":"trace[1454177473] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"146.287243ms","start":"2024-03-19T19:10:33.474149Z","end":"2024-03-19T19:10:33.620436Z","steps":["trace[1454177473] 'process raft request'  (duration: 146.02991ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:10:33.620608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.067615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4325"}
	{"level":"info","ts":"2024-03-19T19:10:33.620636Z","caller":"traceutil/trace.go:171","msg":"trace[1485036775] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1545; }","duration":"117.137772ms","start":"2024-03-19T19:10:33.503488Z","end":"2024-03-19T19:10:33.620626Z","steps":["trace[1485036775] 'agreement among raft nodes before linearized reading'  (duration: 117.040834ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:11:26.67722Z","caller":"traceutil/trace.go:171","msg":"trace[362464803] transaction","detail":"{read_only:false; response_revision:1847; number_of_response:1; }","duration":"110.809252ms","start":"2024-03-19T19:11:26.566369Z","end":"2024-03-19T19:11:26.677179Z","steps":["trace[362464803] 'process raft request'  (duration: 110.670266ms)"],"step_count":1}
	
	
	==> gcp-auth [993115486f06573d9a7012f841fdf882e69b1c7d728c3334680cfcb29cb0058b] <==
	2024/03/19 19:10:03 GCP Auth Webhook started!
	2024/03/19 19:10:05 Ready to marshal response ...
	2024/03/19 19:10:05 Ready to write response ...
	2024/03/19 19:10:05 Ready to marshal response ...
	2024/03/19 19:10:05 Ready to write response ...
	2024/03/19 19:10:09 Ready to marshal response ...
	2024/03/19 19:10:09 Ready to write response ...
	2024/03/19 19:10:15 Ready to marshal response ...
	2024/03/19 19:10:15 Ready to write response ...
	2024/03/19 19:10:19 Ready to marshal response ...
	2024/03/19 19:10:19 Ready to write response ...
	2024/03/19 19:10:19 Ready to marshal response ...
	2024/03/19 19:10:19 Ready to write response ...
	2024/03/19 19:10:19 Ready to marshal response ...
	2024/03/19 19:10:19 Ready to write response ...
	2024/03/19 19:10:26 Ready to marshal response ...
	2024/03/19 19:10:26 Ready to write response ...
	2024/03/19 19:10:37 Ready to marshal response ...
	2024/03/19 19:10:37 Ready to write response ...
	2024/03/19 19:10:42 Ready to marshal response ...
	2024/03/19 19:10:42 Ready to write response ...
	2024/03/19 19:10:48 Ready to marshal response ...
	2024/03/19 19:10:48 Ready to write response ...
	2024/03/19 19:12:59 Ready to marshal response ...
	2024/03/19 19:12:59 Ready to write response ...
	
	
	==> kernel <==
	 19:13:10 up 6 min,  0 users,  load average: 1.09, 1.03, 0.56
	Linux addons-630101 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4ebb05c8377dfc2146f0e1edc3240a04e51a12ee2573c085a23d8edfe438838] <==
	E0319 19:08:20.828260       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.217.133:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.217.133:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.217.133:443: connect: connection refused
	I0319 19:08:20.882050       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0319 19:10:19.553480       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.164.121"}
	I0319 19:10:21.850252       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0319 19:10:27.263113       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0319 19:10:37.772101       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0319 19:10:38.000906       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.252.73"}
	I0319 19:10:40.739811       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0319 19:10:41.790374       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0319 19:10:51.027675       1 watch.go:253] http2: stream closed
	E0319 19:10:53.653449       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.203:8443->10.244.0.31:60788: read: connection reset by peer
	I0319 19:10:59.086265       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 19:10:59.086483       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 19:10:59.116389       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 19:10:59.116564       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 19:10:59.156298       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 19:10:59.156393       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 19:10:59.162295       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 19:10:59.162422       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 19:10:59.172113       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 19:10:59.172235       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0319 19:11:00.157771       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0319 19:11:00.173111       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0319 19:11:00.194638       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0319 19:12:59.302002       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.121.131"}
	
	
	==> kube-controller-manager [1010ef6c02f745ca9d7762586cc30ca91362d209752e2bd9ad4b71ba7956c852] <==
	W0319 19:11:41.423523       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:11:41.423621       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0319 19:12:08.765570       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:12:08.765739       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0319 19:12:14.789810       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:12:14.790028       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0319 19:12:18.807692       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:12:18.807806       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0319 19:12:39.764020       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:12:39.764109       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0319 19:12:42.696677       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:12:42.696777       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0319 19:12:50.744911       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 19:12:50.744990       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0319 19:12:59.084723       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0319 19:12:59.123416       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-hbnhf"
	I0319 19:12:59.149530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.373921ms"
	I0319 19:12:59.161786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.963125ms"
	I0319 19:12:59.161982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.7µs"
	I0319 19:12:59.169704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.224µs"
	I0319 19:13:01.564321       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0319 19:13:01.569298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="3.741µs"
	I0319 19:13:01.587130       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0319 19:13:03.829544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.890822ms"
	I0319 19:13:03.829702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.569µs"
	
	
	==> kube-proxy [a463f4128c052bbcc05244f1f511d09edc86a46215c7a8b48fba4ce86faa488e] <==
	I0319 19:07:29.190016       1 server_others.go:72] "Using iptables proxy"
	I0319 19:07:29.207696       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	I0319 19:07:29.338893       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:07:29.338914       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:07:29.338926       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:07:29.350084       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:07:29.350279       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:07:29.350329       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:07:29.351390       1 config.go:188] "Starting service config controller"
	I0319 19:07:29.351440       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:07:29.351462       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:07:29.351466       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:07:29.354252       1 config.go:315] "Starting node config controller"
	I0319 19:07:29.354290       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:07:29.453957       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 19:07:29.454016       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:07:29.455312       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d8f99dc7886e984650a1fc2217da499bef6a1508872e550c4365fd4b02f859ec] <==
	W0319 19:07:10.488308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 19:07:10.488316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 19:07:10.488353       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 19:07:10.488361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 19:07:11.354808       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 19:07:11.354986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 19:07:11.370638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 19:07:11.370803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 19:07:11.415788       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:07:11.415963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:07:11.487576       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 19:07:11.487733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 19:07:11.500405       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 19:07:11.500527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:07:11.575741       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 19:07:11.575969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 19:07:11.642376       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 19:07:11.642475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 19:07:11.682198       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 19:07:11.682298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 19:07:11.774486       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:07:11.774655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:07:11.920005       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:07:11.920194       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 19:07:13.869958       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 19:12:59 addons-630101 kubelet[1274]: I0319 19:12:59.134195    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="35661e19-b0f1-49c6-93fc-7149b160f91e" containerName="csi-resizer"
	Mar 19 19:12:59 addons-630101 kubelet[1274]: I0319 19:12:59.134201    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9073ade-2b9c-4207-8340-54a20004f6bb" containerName="liveness-probe"
	Mar 19 19:12:59 addons-630101 kubelet[1274]: I0319 19:12:59.211314    1274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psttn\" (UniqueName: \"kubernetes.io/projected/4a0266f3-e5c5-4490-a45e-5844528a33ff-kube-api-access-psttn\") pod \"hello-world-app-5d77478584-hbnhf\" (UID: \"4a0266f3-e5c5-4490-a45e-5844528a33ff\") " pod="default/hello-world-app-5d77478584-hbnhf"
	Mar 19 19:12:59 addons-630101 kubelet[1274]: I0319 19:12:59.211396    1274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4a0266f3-e5c5-4490-a45e-5844528a33ff-gcp-creds\") pod \"hello-world-app-5d77478584-hbnhf\" (UID: \"4a0266f3-e5c5-4490-a45e-5844528a33ff\") " pod="default/hello-world-app-5d77478584-hbnhf"
	Mar 19 19:13:00 addons-630101 kubelet[1274]: I0319 19:13:00.317942    1274 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xfrf\" (UniqueName: \"kubernetes.io/projected/da5b9f55-8583-40af-8c8c-cbf974077352-kube-api-access-7xfrf\") pod \"da5b9f55-8583-40af-8c8c-cbf974077352\" (UID: \"da5b9f55-8583-40af-8c8c-cbf974077352\") "
	Mar 19 19:13:00 addons-630101 kubelet[1274]: I0319 19:13:00.320582    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5b9f55-8583-40af-8c8c-cbf974077352-kube-api-access-7xfrf" (OuterVolumeSpecName: "kube-api-access-7xfrf") pod "da5b9f55-8583-40af-8c8c-cbf974077352" (UID: "da5b9f55-8583-40af-8c8c-cbf974077352"). InnerVolumeSpecName "kube-api-access-7xfrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 19 19:13:00 addons-630101 kubelet[1274]: I0319 19:13:00.419312    1274 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7xfrf\" (UniqueName: \"kubernetes.io/projected/da5b9f55-8583-40af-8c8c-cbf974077352-kube-api-access-7xfrf\") on node \"addons-630101\" DevicePath \"\""
	Mar 19 19:13:00 addons-630101 kubelet[1274]: I0319 19:13:00.783824    1274 scope.go:117] "RemoveContainer" containerID="61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff"
	Mar 19 19:13:00 addons-630101 kubelet[1274]: I0319 19:13:00.840208    1274 scope.go:117] "RemoveContainer" containerID="61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff"
	Mar 19 19:13:00 addons-630101 kubelet[1274]: E0319 19:13:00.841272    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff\": container with ID starting with 61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff not found: ID does not exist" containerID="61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff"
	Mar 19 19:13:00 addons-630101 kubelet[1274]: I0319 19:13:00.841349    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff"} err="failed to get container status \"61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff\": rpc error: code = NotFound desc = could not find container \"61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff\": container with ID starting with 61d21cd48a031cfa28cdfd579eab9acf57627ad8790ff90a0e7a4236ead686ff not found: ID does not exist"
	Mar 19 19:13:02 addons-630101 kubelet[1274]: I0319 19:13:02.143023    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821e0de3-f0d3-47cc-b8eb-f7b07314147c" path="/var/lib/kubelet/pods/821e0de3-f0d3-47cc-b8eb-f7b07314147c/volumes"
	Mar 19 19:13:02 addons-630101 kubelet[1274]: I0319 19:13:02.144123    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d0d039-899f-47a4-9521-814981244259" path="/var/lib/kubelet/pods/b1d0d039-899f-47a4-9521-814981244259/volumes"
	Mar 19 19:13:02 addons-630101 kubelet[1274]: I0319 19:13:02.145995    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5b9f55-8583-40af-8c8c-cbf974077352" path="/var/lib/kubelet/pods/da5b9f55-8583-40af-8c8c-cbf974077352/volumes"
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.814275    1274 scope.go:117] "RemoveContainer" containerID="d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c"
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.839097    1274 scope.go:117] "RemoveContainer" containerID="d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c"
	Mar 19 19:13:04 addons-630101 kubelet[1274]: E0319 19:13:04.839655    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c\": container with ID starting with d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c not found: ID does not exist" containerID="d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c"
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.839707    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c"} err="failed to get container status \"d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c\": rpc error: code = NotFound desc = could not find container \"d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c\": container with ID starting with d27f9c90a4bcfcd096b88cc5f00d12b9e7bbb7a293d6964ba61c1443c7cddc6c not found: ID does not exist"
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.856523    1274 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-268r8\" (UniqueName: \"kubernetes.io/projected/6161388b-a89d-4f18-9ccd-9a29f419c75d-kube-api-access-268r8\") pod \"6161388b-a89d-4f18-9ccd-9a29f419c75d\" (UID: \"6161388b-a89d-4f18-9ccd-9a29f419c75d\") "
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.856569    1274 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6161388b-a89d-4f18-9ccd-9a29f419c75d-webhook-cert\") pod \"6161388b-a89d-4f18-9ccd-9a29f419c75d\" (UID: \"6161388b-a89d-4f18-9ccd-9a29f419c75d\") "
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.862048    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6161388b-a89d-4f18-9ccd-9a29f419c75d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6161388b-a89d-4f18-9ccd-9a29f419c75d" (UID: "6161388b-a89d-4f18-9ccd-9a29f419c75d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.863070    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6161388b-a89d-4f18-9ccd-9a29f419c75d-kube-api-access-268r8" (OuterVolumeSpecName: "kube-api-access-268r8") pod "6161388b-a89d-4f18-9ccd-9a29f419c75d" (UID: "6161388b-a89d-4f18-9ccd-9a29f419c75d"). InnerVolumeSpecName "kube-api-access-268r8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.957291    1274 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-268r8\" (UniqueName: \"kubernetes.io/projected/6161388b-a89d-4f18-9ccd-9a29f419c75d-kube-api-access-268r8\") on node \"addons-630101\" DevicePath \"\""
	Mar 19 19:13:04 addons-630101 kubelet[1274]: I0319 19:13:04.957334    1274 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6161388b-a89d-4f18-9ccd-9a29f419c75d-webhook-cert\") on node \"addons-630101\" DevicePath \"\""
	Mar 19 19:13:06 addons-630101 kubelet[1274]: I0319 19:13:06.126068    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6161388b-a89d-4f18-9ccd-9a29f419c75d" path="/var/lib/kubelet/pods/6161388b-a89d-4f18-9ccd-9a29f419c75d/volumes"
	
	
	==> storage-provisioner [3c13f8a6b54d172ed461fbd8e5bd8a1d029f4250670bbbcaa53041af4d5bfe5f] <==
	I0319 19:07:35.099420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 19:07:35.186990       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 19:07:35.209699       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 19:07:35.346228       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 19:07:35.346360       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-630101_6a67b5f3-2387-4256-9104-84289a338d23!
	I0319 19:07:35.346406       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"57f7c484-c46b-4694-a274-fd22b3b97b3c", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-630101_6a67b5f3-2387-4256-9104-84289a338d23 became leader
	I0319 19:07:35.464055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-630101_6a67b5f3-2387-4256-9104-84289a338d23!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630101 -n addons-630101
helpers_test.go:261: (dbg) Run:  kubectl --context addons-630101 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.34s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-630101
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-630101: exit status 82 (2m0.461775093s)

                                                
                                                
-- stdout --
	* Stopping node "addons-630101"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-630101" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-630101
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-630101: exit status 11 (21.618563203s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-630101" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-630101
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-630101: exit status 11 (6.143825742s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-630101" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-630101
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-630101: exit status 11 (6.143662927s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-630101" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.37s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (220.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1d54e4fb-68b7-432a-ad8d-32a232689162] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005042064s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-481771 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-481771 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-481771 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-481771 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-481771 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f8096954-dc2f-4620-ae7e-be242500b1db] Pending
helpers_test.go:344: "sp-pod" [f8096954-dc2f-4620-ae7e-be242500b1db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f8096954-dc2f-4620-ae7e-be242500b1db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.004211882s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-481771 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-481771 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-481771 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a773f634-6014-4677-bbb5-ea62a0f29674] Pending
helpers_test.go:344: "sp-pod" [a773f634-6014-4677-bbb5-ea62a0f29674] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-481771 -n functional-481771
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-03-19 19:23:09.782783079 +0000 UTC m=+1113.903402844
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-481771 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-481771 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-481771/192.168.39.193
Start Time:       Tue, 19 Mar 2024 19:20:09 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4ffql (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-4ffql:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m     default-scheduler  Successfully assigned default/sp-pod to functional-481771
  Normal  Pulling    2m59s  kubelet            Pulling image "docker.io/nginx"
  Normal  Pulled     2m50s  kubelet            Successfully pulled image "docker.io/nginx" in 1.042s (8.844s including waiting)
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-481771 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-481771 logs sp-pod -n default: exit status 1 (69.862113ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-481771 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-481771 -n functional-481771
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 logs -n 25: (1.623003904s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-481771 ssh sudo                                             | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| mount          | -p functional-481771                                                   | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-481771                                                   | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-481771 ssh findmnt                                          | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-481771                                                   | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-481771 ssh findmnt                                          | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh            | functional-481771 ssh findmnt                                          | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh            | functional-481771 ssh findmnt                                          | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-481771                                                   | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | --kill=true                                                            |                   |         |         |                     |                     |
	| license        |                                                                        | minikube          | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	| update-context | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| image          | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh            | functional-481771 ssh pgrep                                            | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| image          | functional-481771 image build -t                                       | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | localhost/my-image:functional-481771                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	| image          | functional-481771 image ls                                             | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	| image          | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | image ls --format json                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | image ls --format table                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| service        | functional-481771 service list                                         | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	| service        | functional-481771 service list                                         | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | -o json                                                                |                   |         |         |                     |                     |
	| service        | functional-481771 service                                              | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | --namespace=default --https                                            |                   |         |         |                     |                     |
	|                | --url hello-node                                                       |                   |         |         |                     |                     |
	| service        | functional-481771                                                      | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | service hello-node --url                                               |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                                       |                   |         |         |                     |                     |
	| service        | functional-481771 service                                              | functional-481771 | jenkins | v1.32.0 | 19 Mar 24 19:20 UTC | 19 Mar 24 19:20 UTC |
	|                | hello-node --url                                                       |                   |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:20:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:20:04.138637   25633 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:20:04.138896   25633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:20:04.138907   25633 out.go:304] Setting ErrFile to fd 2...
	I0319 19:20:04.138911   25633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:20:04.139174   25633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:20:04.139776   25633 out.go:298] Setting JSON to false
	I0319 19:20:04.140675   25633 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3702,"bootTime":1710872302,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:20:04.140738   25633 start.go:139] virtualization: kvm guest
	I0319 19:20:04.143068   25633 out.go:177] * [functional-481771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:20:04.144474   25633 notify.go:220] Checking for updates...
	I0319 19:20:04.146153   25633 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:20:04.147674   25633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:20:04.149089   25633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:20:04.150453   25633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:20:04.151785   25633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:20:04.153204   25633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:20:04.155015   25633 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:20:04.155388   25633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:20:04.155440   25633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:20:04.170795   25633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0319 19:20:04.171177   25633 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:20:04.171732   25633 main.go:141] libmachine: Using API Version  1
	I0319 19:20:04.171770   25633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:20:04.172117   25633 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:20:04.172304   25633 main.go:141] libmachine: (functional-481771) Calling .DriverName
	I0319 19:20:04.172551   25633 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:20:04.172811   25633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:20:04.172841   25633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:20:04.186609   25633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0319 19:20:04.187041   25633 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:20:04.187480   25633 main.go:141] libmachine: Using API Version  1
	I0319 19:20:04.187505   25633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:20:04.187791   25633 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:20:04.187938   25633 main.go:141] libmachine: (functional-481771) Calling .DriverName
	I0319 19:20:04.219725   25633 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 19:20:04.221218   25633 start.go:297] selected driver: kvm2
	I0319 19:20:04.221231   25633 start.go:901] validating driver "kvm2" against &{Name:functional-481771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-481771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:20:04.221342   25633 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:20:04.222558   25633 cni.go:84] Creating CNI manager for ""
	I0319 19:20:04.222577   25633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:20:04.222634   25633 start.go:340] cluster config:
	{Name:functional-481771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-481771 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:20:04.224484   25633 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.666282142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876190666255179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275597,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42bb76e6-9d9f-4374-8efb-b5e3effaf038 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.666990534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ceeda43-2a12-4c2b-a951-cfa93b7cb85f name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.667144140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ceeda43-2a12-4c2b-a951-cfa93b7cb85f name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.667638466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d1886dfdaa2505c3c57d22bcec903c1ef8c7c4f4b22bed4463e55a2e465eb6a,PodSandboxId:c2e81c5695f49a4150b44d78ca8fd6b2c3352ae2c7074aa970dd134163eafe11,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710876018161135522,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-fjmjg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 408b28f6-e18e-451b-b248-d263f35fa43f,},Annotations:map[string]string{io.kubernetes.container.hash: e729e823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f460620c42ff0e656647be73114d543b03901c81ab9f992d493c8f0a7469d311,PodSandboxId:cf3d6dea9de58766d516c2e34a1577097a8ce9110e72ddb2a466a5b996046576,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1710876018063608533,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-8694d4445c-ww9l6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 795a9058-4f9f-4d98-9110-1aa6aeb73d70,},Annotations:map[string]string{io.kubernetes.container.hash: e7110459,io.kubernetes.c
ontainer.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c96de25f6094335539f754f96f83f2af62b00633a20959ec6d8950f2a272cb,PodSandboxId:7cd76e3335d88105d6f20737da0a7dafbaba93dd50302116ae3b52955852b803,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710876010774541446,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-sl7hw,io.kubernetes.pod.namespace: kubernetes
-dashboard,io.kubernetes.pod.uid: 85450237-1652-411f-b7e4-f5a957926869,},Annotations:map[string]string{io.kubernetes.container.hash: babf4c69,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f976383c6c3693e052a99139ae646f64efde7026edcf2048b80d06acfe98b40b,PodSandboxId:5b8980b351af87a0c69ee66ed356bf6eeb97dc007c9e6cf4a8660f458fabc943,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710876005224332208,Labels:map[string]string{io.kubernetes.container.name:
mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3c21b875-65ac-4bd8-92bc-8f9b5609e8e8,},Annotations:map[string]string{io.kubernetes.container.hash: c1e73ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e13815a1fe64975ad2b406a1733dfa84359dbbfd3273905ccd894bb3ce750ae,PodSandboxId:0e70e5b151bc24c97f8738e8215b5c6d32f60fab77363ebff5e242a3e6bfbed2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710875991876269508,Labels:map[string]string{io.kubernetes.container.name: echo
server,io.kubernetes.pod.name: hello-node-connect-55497b8b78-zlkt8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34d14a-7d9c-4be0-a3d5-7fcc12f3d144,},Annotations:map[string]string{io.kubernetes.container.hash: da0b363d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e39aace946f6eecada30fd7e5f9eb6e047b906b5b3e5bf70225cd8f223213e,PodSandboxId:8444a8cbe2107d9afcc3c6ee8d72805821e72779937d6605c9ef4d560e3de601,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710875986977972953,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-859648c796-jl558,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90646cfc-5c68-4e6e-ad3a-2c2b7dfe7ce3,},Annotations:map[string]string{io.kubernetes.container.hash: d1d499cd,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b064ab99a09bb449c1ca27b71c172ad4a6753a8e971f8ffdebebea3426fabc,PodSandboxId:ff7da47f8841f231815c6e0d645c05ef4e3e6b9b17194d1c32086958a6e5bb4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1710875944196298226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc2bb6d8d4bf79c99fc37d89325c36e9c630ea49e4a3ab252db7134cd090be81,PodSandboxId:7a108fe6c460c2b2a76cb59b002d777228f1f76904345e21a36a05477d465789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875944199575178,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da5b832ad47171c8796a1bd5fe175c921d52590450765b7e4240ae5f3dd4566,PodSandboxId:e83cfbe9dec30c42eeded084d2517572a564aa15fab5435eb81b80775771d396,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875944206854757,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fe6a90392e2636cb06983e6534b5f9e4e7a0a84553283bb45346723642765f,PodSandboxId:e602c1b081116ea9abbc1a66ec5be322ed5b1a10545ccdf4da01fb647ceeb1a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875940611965125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6781f4c3a4c63e5d4b00fc16c068f7e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8694c69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63a1b085d4b0bda187d445f08f5bb09bae37437126fbcdd289bdb65e791e3a2,PodSandboxId:6cea0abe63b766120b6f7bc17c57c05f3f5734822043d17fd6686abab739becb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875940393973923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3981755cba439f7a565b89e36d07023cf3748e7a97d4e53f62350c82c2977,PodSandboxId:ae7a203263d22eb0660f95542990dba38b6f7a48f58cd07966c65403df6ddd3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875940384931512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639e34b6141c805f4e89fb7fc53677b480740ef0386d1c10a1cde242d455f2f,PodSandboxId:2351d1a012cc33e21172175e9c37e6b103a324d89689550914913e243eb23f57,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875940370808618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b804c7636873c151fa287dc408a68ec8e0f2946db1616ec5a5249d445a01e7e5,PodSandboxId:d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08
ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710875903608512604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13de1221d2ab6198bdb2e2e38298d8a04a4a92e88d534e4a698015584edd6a59,PodSandboxId:4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710875903424198117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe781f2ab111797470920154f60fae0b0bd4c68c2214df55d72d6af98911982c,PodSandboxId:4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710875903395953446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c06cf4a71baedc6c17501a68ba76c8c5416f84d66a82063fe4daec34d34b96a,PodSandboxId:86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d
98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710875898519820538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7839c9edd38c103f5121bfa4eeb59337528ebb1ba6a250229b1c307cbad8dc86,PodSandboxId:429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f
97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710875898480656315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d43f6580d0710552fd406e22e4a70c23a7c0da4e32554a30604edaa4e1bd3e9,PodSandboxId:db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1
f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710875898488037097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ceeda43-2a12-4c2b-a951-cfa93b7cb85f name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.713307121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1663460c-c4cd-43fd-ae0d-3e72169186ab name=/runtime.v1.RuntimeService/Version
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.713462153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1663460c-c4cd-43fd-ae0d-3e72169186ab name=/runtime.v1.RuntimeService/Version
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.715778070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2880665-4240-4f93-85b3-ba74b8e1fd71 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.717307009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876190717281212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275597,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2880665-4240-4f93-85b3-ba74b8e1fd71 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.717966041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39b44549-c6f0-405e-8bff-8e8a0bee5e9e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.718049205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39b44549-c6f0-405e-8bff-8e8a0bee5e9e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.718673979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d1886dfdaa2505c3c57d22bcec903c1ef8c7c4f4b22bed4463e55a2e465eb6a,PodSandboxId:c2e81c5695f49a4150b44d78ca8fd6b2c3352ae2c7074aa970dd134163eafe11,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710876018161135522,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-fjmjg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 408b28f6-e18e-451b-b248-d263f35fa43f,},Annotations:map[string]string{io.kubernetes.container.hash: e729e823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f460620c42ff0e656647be73114d543b03901c81ab9f992d493c8f0a7469d311,PodSandboxId:cf3d6dea9de58766d516c2e34a1577097a8ce9110e72ddb2a466a5b996046576,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1710876018063608533,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-8694d4445c-ww9l6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 795a9058-4f9f-4d98-9110-1aa6aeb73d70,},Annotations:map[string]string{io.kubernetes.container.hash: e7110459,io.kubernetes.c
ontainer.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c96de25f6094335539f754f96f83f2af62b00633a20959ec6d8950f2a272cb,PodSandboxId:7cd76e3335d88105d6f20737da0a7dafbaba93dd50302116ae3b52955852b803,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710876010774541446,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-sl7hw,io.kubernetes.pod.namespace: kubernetes
-dashboard,io.kubernetes.pod.uid: 85450237-1652-411f-b7e4-f5a957926869,},Annotations:map[string]string{io.kubernetes.container.hash: babf4c69,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f976383c6c3693e052a99139ae646f64efde7026edcf2048b80d06acfe98b40b,PodSandboxId:5b8980b351af87a0c69ee66ed356bf6eeb97dc007c9e6cf4a8660f458fabc943,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710876005224332208,Labels:map[string]string{io.kubernetes.container.name:
mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3c21b875-65ac-4bd8-92bc-8f9b5609e8e8,},Annotations:map[string]string{io.kubernetes.container.hash: c1e73ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e13815a1fe64975ad2b406a1733dfa84359dbbfd3273905ccd894bb3ce750ae,PodSandboxId:0e70e5b151bc24c97f8738e8215b5c6d32f60fab77363ebff5e242a3e6bfbed2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710875991876269508,Labels:map[string]string{io.kubernetes.container.name: echo
server,io.kubernetes.pod.name: hello-node-connect-55497b8b78-zlkt8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34d14a-7d9c-4be0-a3d5-7fcc12f3d144,},Annotations:map[string]string{io.kubernetes.container.hash: da0b363d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e39aace946f6eecada30fd7e5f9eb6e047b906b5b3e5bf70225cd8f223213e,PodSandboxId:8444a8cbe2107d9afcc3c6ee8d72805821e72779937d6605c9ef4d560e3de601,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710875986977972953,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-859648c796-jl558,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90646cfc-5c68-4e6e-ad3a-2c2b7dfe7ce3,},Annotations:map[string]string{io.kubernetes.container.hash: d1d499cd,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b064ab99a09bb449c1ca27b71c172ad4a6753a8e971f8ffdebebea3426fabc,PodSandboxId:ff7da47f8841f231815c6e0d645c05ef4e3e6b9b17194d1c32086958a6e5bb4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1710875944196298226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc2bb6d8d4bf79c99fc37d89325c36e9c630ea49e4a3ab252db7134cd090be81,PodSandboxId:7a108fe6c460c2b2a76cb59b002d777228f1f76904345e21a36a05477d465789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875944199575178,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da5b832ad47171c8796a1bd5fe175c921d52590450765b7e4240ae5f3dd4566,PodSandboxId:e83cfbe9dec30c42eeded084d2517572a564aa15fab5435eb81b80775771d396,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875944206854757,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fe6a90392e2636cb06983e6534b5f9e4e7a0a84553283bb45346723642765f,PodSandboxId:e602c1b081116ea9abbc1a66ec5be322ed5b1a10545ccdf4da01fb647ceeb1a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875940611965125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6781f4c3a4c63e5d4b00fc16c068f7e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8694c69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63a1b085d4b0bda187d445f08f5bb09bae37437126fbcdd289bdb65e791e3a2,PodSandboxId:6cea0abe63b766120b6f7bc17c57c05f3f5734822043d17fd6686abab739becb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875940393973923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3981755cba439f7a565b89e36d07023cf3748e7a97d4e53f62350c82c2977,PodSandboxId:ae7a203263d22eb0660f95542990dba38b6f7a48f58cd07966c65403df6ddd3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875940384931512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639e34b6141c805f4e89fb7fc53677b480740ef0386d1c10a1cde242d455f2f,PodSandboxId:2351d1a012cc33e21172175e9c37e6b103a324d89689550914913e243eb23f57,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875940370808618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b804c7636873c151fa287dc408a68ec8e0f2946db1616ec5a5249d445a01e7e5,PodSandboxId:d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08
ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710875903608512604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13de1221d2ab6198bdb2e2e38298d8a04a4a92e88d534e4a698015584edd6a59,PodSandboxId:4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710875903424198117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe781f2ab111797470920154f60fae0b0bd4c68c2214df55d72d6af98911982c,PodSandboxId:4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710875903395953446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c06cf4a71baedc6c17501a68ba76c8c5416f84d66a82063fe4daec34d34b96a,PodSandboxId:86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d
98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710875898519820538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7839c9edd38c103f5121bfa4eeb59337528ebb1ba6a250229b1c307cbad8dc86,PodSandboxId:429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f
97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710875898480656315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d43f6580d0710552fd406e22e4a70c23a7c0da4e32554a30604edaa4e1bd3e9,PodSandboxId:db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1
f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710875898488037097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39b44549-c6f0-405e-8bff-8e8a0bee5e9e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.754434099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=074f4d84-d51f-43c4-bef0-8fe33ebcc992 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.754502207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=074f4d84-d51f-43c4-bef0-8fe33ebcc992 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.756326285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac72a120-65af-4fea-a8a7-052bca8842c9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.757582189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876190757557676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275597,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac72a120-65af-4fea-a8a7-052bca8842c9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.758213523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f962ac03-8b97-4409-ab37-34c1662b8cea name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.758447303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f962ac03-8b97-4409-ab37-34c1662b8cea name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.758824065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d1886dfdaa2505c3c57d22bcec903c1ef8c7c4f4b22bed4463e55a2e465eb6a,PodSandboxId:c2e81c5695f49a4150b44d78ca8fd6b2c3352ae2c7074aa970dd134163eafe11,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710876018161135522,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-fjmjg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 408b28f6-e18e-451b-b248-d263f35fa43f,},Annotations:map[string]string{io.kubernetes.container.hash: e729e823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f460620c42ff0e656647be73114d543b03901c81ab9f992d493c8f0a7469d311,PodSandboxId:cf3d6dea9de58766d516c2e34a1577097a8ce9110e72ddb2a466a5b996046576,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1710876018063608533,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-8694d4445c-ww9l6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 795a9058-4f9f-4d98-9110-1aa6aeb73d70,},Annotations:map[string]string{io.kubernetes.container.hash: e7110459,io.kubernetes.c
ontainer.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c96de25f6094335539f754f96f83f2af62b00633a20959ec6d8950f2a272cb,PodSandboxId:7cd76e3335d88105d6f20737da0a7dafbaba93dd50302116ae3b52955852b803,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710876010774541446,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-sl7hw,io.kubernetes.pod.namespace: kubernetes
-dashboard,io.kubernetes.pod.uid: 85450237-1652-411f-b7e4-f5a957926869,},Annotations:map[string]string{io.kubernetes.container.hash: babf4c69,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f976383c6c3693e052a99139ae646f64efde7026edcf2048b80d06acfe98b40b,PodSandboxId:5b8980b351af87a0c69ee66ed356bf6eeb97dc007c9e6cf4a8660f458fabc943,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710876005224332208,Labels:map[string]string{io.kubernetes.container.name:
mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3c21b875-65ac-4bd8-92bc-8f9b5609e8e8,},Annotations:map[string]string{io.kubernetes.container.hash: c1e73ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e13815a1fe64975ad2b406a1733dfa84359dbbfd3273905ccd894bb3ce750ae,PodSandboxId:0e70e5b151bc24c97f8738e8215b5c6d32f60fab77363ebff5e242a3e6bfbed2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710875991876269508,Labels:map[string]string{io.kubernetes.container.name: echo
server,io.kubernetes.pod.name: hello-node-connect-55497b8b78-zlkt8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34d14a-7d9c-4be0-a3d5-7fcc12f3d144,},Annotations:map[string]string{io.kubernetes.container.hash: da0b363d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e39aace946f6eecada30fd7e5f9eb6e047b906b5b3e5bf70225cd8f223213e,PodSandboxId:8444a8cbe2107d9afcc3c6ee8d72805821e72779937d6605c9ef4d560e3de601,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710875986977972953,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-859648c796-jl558,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90646cfc-5c68-4e6e-ad3a-2c2b7dfe7ce3,},Annotations:map[string]string{io.kubernetes.container.hash: d1d499cd,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b064ab99a09bb449c1ca27b71c172ad4a6753a8e971f8ffdebebea3426fabc,PodSandboxId:ff7da47f8841f231815c6e0d645c05ef4e3e6b9b17194d1c32086958a6e5bb4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1710875944196298226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc2bb6d8d4bf79c99fc37d89325c36e9c630ea49e4a3ab252db7134cd090be81,PodSandboxId:7a108fe6c460c2b2a76cb59b002d777228f1f76904345e21a36a05477d465789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875944199575178,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da5b832ad47171c8796a1bd5fe175c921d52590450765b7e4240ae5f3dd4566,PodSandboxId:e83cfbe9dec30c42eeded084d2517572a564aa15fab5435eb81b80775771d396,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875944206854757,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fe6a90392e2636cb06983e6534b5f9e4e7a0a84553283bb45346723642765f,PodSandboxId:e602c1b081116ea9abbc1a66ec5be322ed5b1a10545ccdf4da01fb647ceeb1a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875940611965125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6781f4c3a4c63e5d4b00fc16c068f7e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8694c69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63a1b085d4b0bda187d445f08f5bb09bae37437126fbcdd289bdb65e791e3a2,PodSandboxId:6cea0abe63b766120b6f7bc17c57c05f3f5734822043d17fd6686abab739becb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875940393973923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3981755cba439f7a565b89e36d07023cf3748e7a97d4e53f62350c82c2977,PodSandboxId:ae7a203263d22eb0660f95542990dba38b6f7a48f58cd07966c65403df6ddd3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875940384931512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639e34b6141c805f4e89fb7fc53677b480740ef0386d1c10a1cde242d455f2f,PodSandboxId:2351d1a012cc33e21172175e9c37e6b103a324d89689550914913e243eb23f57,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875940370808618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b804c7636873c151fa287dc408a68ec8e0f2946db1616ec5a5249d445a01e7e5,PodSandboxId:d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08
ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710875903608512604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13de1221d2ab6198bdb2e2e38298d8a04a4a92e88d534e4a698015584edd6a59,PodSandboxId:4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710875903424198117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe781f2ab111797470920154f60fae0b0bd4c68c2214df55d72d6af98911982c,PodSandboxId:4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710875903395953446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c06cf4a71baedc6c17501a68ba76c8c5416f84d66a82063fe4daec34d34b96a,PodSandboxId:86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d
98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710875898519820538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7839c9edd38c103f5121bfa4eeb59337528ebb1ba6a250229b1c307cbad8dc86,PodSandboxId:429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f
97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710875898480656315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d43f6580d0710552fd406e22e4a70c23a7c0da4e32554a30604edaa4e1bd3e9,PodSandboxId:db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1
f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710875898488037097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f962ac03-8b97-4409-ab37-34c1662b8cea name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.797279036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=694938be-e42e-4645-ae23-bdee9e7b6ed5 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.797477647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=694938be-e42e-4645-ae23-bdee9e7b6ed5 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.799130342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c47c7a32-2207-4fc9-b0a4-f6c2aa58027a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.800295663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876190800268203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275597,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c47c7a32-2207-4fc9-b0a4-f6c2aa58027a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.801087613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03872bdd-758b-4536-bf8c-a05ffd36b1ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.801165744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03872bdd-758b-4536-bf8c-a05ffd36b1ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:23:10 functional-481771 crio[3972]: time="2024-03-19 19:23:10.801639644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d1886dfdaa2505c3c57d22bcec903c1ef8c7c4f4b22bed4463e55a2e465eb6a,PodSandboxId:c2e81c5695f49a4150b44d78ca8fd6b2c3352ae2c7074aa970dd134163eafe11,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710876018161135522,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-fjmjg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 408b28f6-e18e-451b-b248-d263f35fa43f,},Annotations:map[string]string{io.kubernetes.container.hash: e729e823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f460620c42ff0e656647be73114d543b03901c81ab9f992d493c8f0a7469d311,PodSandboxId:cf3d6dea9de58766d516c2e34a1577097a8ce9110e72ddb2a466a5b996046576,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1710876018063608533,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-8694d4445c-ww9l6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 795a9058-4f9f-4d98-9110-1aa6aeb73d70,},Annotations:map[string]string{io.kubernetes.container.hash: e7110459,io.kubernetes.c
ontainer.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9c96de25f6094335539f754f96f83f2af62b00633a20959ec6d8950f2a272cb,PodSandboxId:7cd76e3335d88105d6f20737da0a7dafbaba93dd50302116ae3b52955852b803,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710876010774541446,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-sl7hw,io.kubernetes.pod.namespace: kubernetes
-dashboard,io.kubernetes.pod.uid: 85450237-1652-411f-b7e4-f5a957926869,},Annotations:map[string]string{io.kubernetes.container.hash: babf4c69,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f976383c6c3693e052a99139ae646f64efde7026edcf2048b80d06acfe98b40b,PodSandboxId:5b8980b351af87a0c69ee66ed356bf6eeb97dc007c9e6cf4a8660f458fabc943,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710876005224332208,Labels:map[string]string{io.kubernetes.container.name:
mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3c21b875-65ac-4bd8-92bc-8f9b5609e8e8,},Annotations:map[string]string{io.kubernetes.container.hash: c1e73ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e13815a1fe64975ad2b406a1733dfa84359dbbfd3273905ccd894bb3ce750ae,PodSandboxId:0e70e5b151bc24c97f8738e8215b5c6d32f60fab77363ebff5e242a3e6bfbed2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710875991876269508,Labels:map[string]string{io.kubernetes.container.name: echo
server,io.kubernetes.pod.name: hello-node-connect-55497b8b78-zlkt8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34d14a-7d9c-4be0-a3d5-7fcc12f3d144,},Annotations:map[string]string{io.kubernetes.container.hash: da0b363d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e39aace946f6eecada30fd7e5f9eb6e047b906b5b3e5bf70225cd8f223213e,PodSandboxId:8444a8cbe2107d9afcc3c6ee8d72805821e72779937d6605c9ef4d560e3de601,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710875986977972953,Labels:map[string]string{io.kubernetes.container.na
me: mysql,io.kubernetes.pod.name: mysql-859648c796-jl558,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90646cfc-5c68-4e6e-ad3a-2c2b7dfe7ce3,},Annotations:map[string]string{io.kubernetes.container.hash: d1d499cd,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b064ab99a09bb449c1ca27b71c172ad4a6753a8e971f8ffdebebea3426fabc,PodSandboxId:ff7da47f8841f231815c6e0d645c05ef4e3e6b9b17194d1c32086958a6e5bb4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1710875944196298226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc2bb6d8d4bf79c99fc37d89325c36e9c630ea49e4a3ab252db7134cd090be81,PodSandboxId:7a108fe6c460c2b2a76cb59b002d777228f1f76904345e21a36a05477d465789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710875944199575178,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da5b832ad47171c8796a1bd5fe175c921d52590450765b7e4240ae5f3dd4566,PodSandboxId:e83cfbe9dec30c42eeded084d2517572a564aa15fab5435eb81b80775771d396,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710875944206854757,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fe6a90392e2636cb06983e6534b5f9e4e7a0a84553283bb45346723642765f,PodSandboxId:e602c1b081116ea9abbc1a66ec5be322ed5b1a10545ccdf4da01fb647ceeb1a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710875940611965125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6781f4c3a4c63e5d4b00fc16c068f7e9,},Annotations:map[string]string{io.kubernetes.container.hash: f8694c69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63a1b085d4b0bda187d445f08f5bb09bae37437126fbcdd289bdb65e791e3a2,PodSandboxId:6cea0abe63b766120b6f7bc17c57c05f3f5734822043d17fd6686abab739becb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710875940393973923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3981755cba439f7a565b89e36d07023cf3748e7a97d4e53f62350c82c2977,PodSandboxId:ae7a203263d22eb0660f95542990dba38b6f7a48f58cd07966c65403df6ddd3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710875940384931512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639e34b6141c805f4e89fb7fc53677b480740ef0386d1c10a1cde242d455f2f,PodSandboxId:2351d1a012cc33e21172175e9c37e6b103a324d89689550914913e243eb23f57,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710875940370808618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b804c7636873c151fa287dc408a68ec8e0f2946db1616ec5a5249d445a01e7e5,PodSandboxId:d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08
ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710875903608512604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6f82b82f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13de1221d2ab6198bdb2e2e38298d8a04a4a92e88d534e4a698015584edd6a59,PodSandboxId:4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710875903424198117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d54e4fb-68b7-432a-ad8d-32a232689162,},Annotations:map[string]string{io.kubernetes.container.hash: 30def411,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe781f2ab111797470920154f60fae0b0bd4c68c2214df55d72d6af98911982c,PodSandboxId:4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,}
,Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710875903395953446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj8qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205df5af-0d30-49c2-9a59-f3f59acac431,},Annotations:map[string]string{io.kubernetes.container.hash: 33340787,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c06cf4a71baedc6c17501a68ba76c8c5416f84d66a82063fe4daec34d34b96a,PodSandboxId:86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d
98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710875898519820538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef80d0d5f467e85879d22835470eeeab,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7839c9edd38c103f5121bfa4eeb59337528ebb1ba6a250229b1c307cbad8dc86,PodSandboxId:429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f
97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710875898480656315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74290a2a2871a79f994e00ecc865513d,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d43f6580d0710552fd406e22e4a70c23a7c0da4e32554a30604edaa4e1bd3e9,PodSandboxId:db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1
f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710875898488037097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-481771,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72dfde4fa877784b1531a1a323f5fa1,},Annotations:map[string]string{io.kubernetes.container.hash: d99ce362,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03872bdd-758b-4536-bf8c-a05ffd36b1ce name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6d1886dfdaa25       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 2 minutes ago       Running             echoserver                  0                   c2e81c5695f49       hello-node-d7447cc7f-fjmjg
	f460620c42ff0       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   cf3d6dea9de58       kubernetes-dashboard-8694d4445c-ww9l6
	d9c96de25f609       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   3 minutes ago       Running             dashboard-metrics-scraper   0                   7cd76e3335d88       dashboard-metrics-scraper-7fd5cb4ddc-sl7hw
	f976383c6c369       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   5b8980b351af8       busybox-mount
	0e13815a1fe64       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   0e70e5b151bc2       hello-node-connect-55497b8b78-zlkt8
	87e39aace946f       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  3 minutes ago       Running             mysql                       0                   8444a8cbe2107       mysql-859648c796-jl558
	5da5b832ad471       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 4 minutes ago       Running             coredns                     2                   e83cfbe9dec30       coredns-76f75df574-qzm9h
	dc2bb6d8d4bf7       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                 4 minutes ago       Running             kube-proxy                  2                   7a108fe6c460c       kube-proxy-rj8qz
	26b064ab99a09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Running             storage-provisioner         2                   ff7da47f8841f       storage-provisioner
	c2fe6a90392e2       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                 4 minutes ago       Running             kube-apiserver              0                   e602c1b081116       kube-apiserver-functional-481771
	e63a1b085d4b0       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                 4 minutes ago       Running             kube-controller-manager     2                   6cea0abe63b76       kube-controller-manager-functional-481771
	66b3981755cba       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                 4 minutes ago       Running             kube-scheduler              2                   ae7a203263d22       kube-scheduler-functional-481771
	1639e34b6141c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 4 minutes ago       Running             etcd                        2                   2351d1a012cc3       etcd-functional-481771
	b804c7636873c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 4 minutes ago       Exited              coredns                     1                   d156c7bae59a7       coredns-76f75df574-qzm9h
	13de1221d2ab6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         1                   4bea36ad52cde       storage-provisioner
	fe781f2ab1117       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                 4 minutes ago       Exited              kube-proxy                  1                   4b1fc8dcd7271       kube-proxy-rj8qz
	0c06cf4a71bae       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                 4 minutes ago       Exited              kube-scheduler              1                   86329c6d0a0d7       kube-scheduler-functional-481771
	4d43f6580d071       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 4 minutes ago       Exited              etcd                        1                   db3156b6f49b4       etcd-functional-481771
	7839c9edd38c1       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                 4 minutes ago       Exited              kube-controller-manager     1                   429e93b456d40       kube-controller-manager-functional-481771
	
	
	==> coredns [5da5b832ad47171c8796a1bd5fe175c921d52590450765b7e4240ae5f3dd4566] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53180 - 19542 "HINFO IN 922482856874874404.4926208019014712656. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014946523s
	
	
	==> coredns [b804c7636873c151fa287dc408a68ec8e0f2946db1616ec5a5249d445a01e7e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40140 - 37506 "HINFO IN 8453730974162945905.8393471969296351046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011464848s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-481771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-481771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=functional-481771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_17_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:17:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-481771
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:23:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:20:35 +0000   Tue, 19 Mar 2024 19:17:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:20:35 +0000   Tue, 19 Mar 2024 19:17:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:20:35 +0000   Tue, 19 Mar 2024 19:17:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:20:35 +0000   Tue, 19 Mar 2024 19:17:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    functional-481771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d19112cc1c24b57af991679c1d24f00
	  System UUID:                6d19112c-c1c2-4b57-af99-1679c1d24f00
	  Boot ID:                    93eaa87d-b139-48c9-b73c-8aeb4abbe36b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-55497b8b78-zlkt8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  default                     hello-node-d7447cc7f-fjmjg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     mysql-859648c796-jl558                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m41s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-76f75df574-qzm9h                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m13s
	  kube-system                 etcd-functional-481771                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m26s
	  kube-system                 kube-apiserver-functional-481771              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-functional-481771     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-rj8qz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-functional-481771              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-sl7hw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-ww9l6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m26s                  kubelet          Node functional-481771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s                  kubelet          Node functional-481771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s                  kubelet          Node functional-481771 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m26s                  kubelet          Node functional-481771 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m14s                  node-controller  Node functional-481771 event: Registered Node functional-481771 in Controller
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node functional-481771 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node functional-481771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node functional-481771 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m37s                  node-controller  Node functional-481771 event: Registered Node functional-481771 in Controller
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m12s)  kubelet          Node functional-481771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m12s)  kubelet          Node functional-481771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m12s)  kubelet          Node functional-481771 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node functional-481771 event: Registered Node functional-481771 in Controller
	
	
	==> dmesg <==
	[  +0.302636] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +4.858607] systemd-fstab-generator[2351]: Ignoring "noauto" option for root device
	[  +0.075015] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.902353] systemd-fstab-generator[2474]: Ignoring "noauto" option for root device
	[  +5.569789] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.370207] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.434146] systemd-fstab-generator[3163]: Ignoring "noauto" option for root device
	[ +18.383306] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[  +0.078068] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.065078] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[  +0.179502] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[  +0.144441] systemd-fstab-generator[3928]: Ignoring "noauto" option for root device
	[  +0.286136] systemd-fstab-generator[3956]: Ignoring "noauto" option for root device
	[  +0.807391] systemd-fstab-generator[4050]: Ignoring "noauto" option for root device
	[  +2.753583] systemd-fstab-generator[4471]: Ignoring "noauto" option for root device
	[  +0.848065] kauditd_printk_skb: 202 callbacks suppressed
	[Mar19 19:19] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.594382] systemd-fstab-generator[4995]: Ignoring "noauto" option for root device
	[  +6.871496] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.002092] kauditd_printk_skb: 17 callbacks suppressed
	[  +9.894393] kauditd_printk_skb: 16 callbacks suppressed
	[ +11.851193] kauditd_printk_skb: 2 callbacks suppressed
	[Mar19 19:20] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.309365] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.357825] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [1639e34b6141c805f4e89fb7fc53677b480740ef0386d1c10a1cde242d455f2f] <==
	{"level":"info","ts":"2024-03-19T19:19:44.733184Z","caller":"traceutil/trace.go:171","msg":"trace[172426837] linearizableReadLoop","detail":"{readStateIndex:741; appliedIndex:740; }","duration":"429.653057ms","start":"2024-03-19T19:19:44.30352Z","end":"2024-03-19T19:19:44.733173Z","steps":["trace[172426837] 'read index received'  (duration: 428.159825ms)","trace[172426837] 'applied index is now lower than readState.Index'  (duration: 1.492873ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T19:19:44.733484Z","caller":"traceutil/trace.go:171","msg":"trace[1596894795] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"442.938403ms","start":"2024-03-19T19:19:44.290537Z","end":"2024-03-19T19:19:44.733475Z","steps":["trace[1596894795] 'process raft request'  (duration: 442.576883ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:19:44.733545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:19:44.290521Z","time spent":"442.983253ms","remote":"127.0.0.1:55906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-481771\" mod_revision:659 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-481771\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-481771\" > >"}
	{"level":"warn","ts":"2024-03-19T19:19:44.733732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.206955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-03-19T19:19:44.733758Z","caller":"traceutil/trace.go:171","msg":"trace[501079081] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:681; }","duration":"430.255614ms","start":"2024-03-19T19:19:44.303496Z","end":"2024-03-19T19:19:44.733751Z","steps":["trace[501079081] 'agreement among raft nodes before linearized reading'  (duration: 430.132017ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:19:44.733774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:19:44.303483Z","time spent":"430.287883ms","remote":"127.0.0.1:55846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-03-19T19:19:44.733899Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.659088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8819"}
	{"level":"info","ts":"2024-03-19T19:19:44.733913Z","caller":"traceutil/trace.go:171","msg":"trace[677562166] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:681; }","duration":"195.692281ms","start":"2024-03-19T19:19:44.538216Z","end":"2024-03-19T19:19:44.733908Z","steps":["trace[677562166] 'agreement among raft nodes before linearized reading'  (duration: 195.639526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:19:44.734197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.771677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T19:19:44.734215Z","caller":"traceutil/trace.go:171","msg":"trace[1995369889] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:681; }","duration":"101.791952ms","start":"2024-03-19T19:19:44.632418Z","end":"2024-03-19T19:19:44.73421Z","steps":["trace[1995369889] 'agreement among raft nodes before linearized reading'  (duration: 101.76238ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:19:52.950564Z","caller":"traceutil/trace.go:171","msg":"trace[1709674053] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"156.338684ms","start":"2024-03-19T19:19:52.794211Z","end":"2024-03-19T19:19:52.950549Z","steps":["trace[1709674053] 'process raft request'  (duration: 156.22638ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:19:52.950974Z","caller":"traceutil/trace.go:171","msg":"trace[114821720] linearizableReadLoop","detail":"{readStateIndex:760; appliedIndex:760; }","duration":"116.210355ms","start":"2024-03-19T19:19:52.834753Z","end":"2024-03-19T19:19:52.950964Z","steps":["trace[114821720] 'read index received'  (duration: 116.206395ms)","trace[114821720] 'applied index is now lower than readState.Index'  (duration: 3.211µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T19:19:52.951205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.779785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8953"}
	{"level":"info","ts":"2024-03-19T19:19:52.951267Z","caller":"traceutil/trace.go:171","msg":"trace[770655787] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:698; }","duration":"112.929381ms","start":"2024-03-19T19:19:52.83833Z","end":"2024-03-19T19:19:52.951259Z","steps":["trace[770655787] 'agreement among raft nodes before linearized reading'  (duration: 112.786437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:19:52.951524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.765465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8953"}
	{"level":"info","ts":"2024-03-19T19:19:52.951573Z","caller":"traceutil/trace.go:171","msg":"trace[1296464425] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:698; }","duration":"116.838797ms","start":"2024-03-19T19:19:52.834728Z","end":"2024-03-19T19:19:52.951567Z","steps":["trace[1296464425] 'agreement among raft nodes before linearized reading'  (duration: 116.738628ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T19:20:17.715487Z","caller":"traceutil/trace.go:171","msg":"trace[131625120] linearizableReadLoop","detail":"{readStateIndex:908; appliedIndex:907; }","duration":"459.932785ms","start":"2024-03-19T19:20:17.255531Z","end":"2024-03-19T19:20:17.715463Z","steps":["trace[131625120] 'read index received'  (duration: 459.724457ms)","trace[131625120] 'applied index is now lower than readState.Index'  (duration: 207.834µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T19:20:17.715757Z","caller":"traceutil/trace.go:171","msg":"trace[713936109] transaction","detail":"{read_only:false; response_revision:838; number_of_response:1; }","duration":"487.746977ms","start":"2024-03-19T19:20:17.228001Z","end":"2024-03-19T19:20:17.715748Z","steps":["trace[713936109] 'process raft request'  (duration: 487.296678ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:20:17.715885Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:20:17.227988Z","time spent":"487.830328ms","remote":"127.0.0.1:55846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:835 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-19T19:20:17.716005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.552494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14790"}
	{"level":"info","ts":"2024-03-19T19:20:17.716048Z","caller":"traceutil/trace.go:171","msg":"trace[548895004] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:838; }","duration":"214.62308ms","start":"2024-03-19T19:20:17.501415Z","end":"2024-03-19T19:20:17.716038Z","steps":["trace[548895004] 'agreement among raft nodes before linearized reading'  (duration: 214.414582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:20:17.716192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.657001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14790"}
	{"level":"info","ts":"2024-03-19T19:20:17.716216Z","caller":"traceutil/trace.go:171","msg":"trace[1938330408] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:838; }","duration":"460.702628ms","start":"2024-03-19T19:20:17.255507Z","end":"2024-03-19T19:20:17.71621Z","steps":["trace[1938330408] 'agreement among raft nodes before linearized reading'  (duration: 460.566496ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:20:17.716232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:20:17.255492Z","time spent":"460.736065ms","remote":"127.0.0.1:55856","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":14814,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-03-19T19:20:48.153036Z","caller":"traceutil/trace.go:171","msg":"trace[970289433] transaction","detail":"{read_only:false; response_revision:886; number_of_response:1; }","duration":"257.815439ms","start":"2024-03-19T19:20:47.895191Z","end":"2024-03-19T19:20:48.153006Z","steps":["trace[970289433] 'process raft request'  (duration: 257.701278ms)"],"step_count":1}
	
	
	==> etcd [4d43f6580d0710552fd406e22e4a70c23a7c0da4e32554a30604edaa4e1bd3e9] <==
	{"level":"info","ts":"2024-03-19T19:18:18.984481Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"97ba5874d4d591f6","initial-advertise-peer-urls":["https://192.168.39.193:2380"],"listen-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.193:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-19T19:18:20.744546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-19T19:18:20.744652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-19T19:18:20.744703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgPreVoteResp from 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-03-19T19:18:20.744744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T19:18:20.744769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-03-19T19:18:20.744799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became leader at term 3"}
	{"level":"info","ts":"2024-03-19T19:18:20.744824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-03-19T19:18:20.750858Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97ba5874d4d591f6","local-member-attributes":"{Name:functional-481771 ClientURLs:[https://192.168.39.193:2379]}","request-path":"/0/members/97ba5874d4d591f6/attributes","cluster-id":"9afeb12ac4c1a90a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T19:18:20.750927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T19:18:20.75105Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T19:18:20.751101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T19:18:20.751124Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T19:18:20.753099Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-19T19:18:20.753173Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	{"level":"info","ts":"2024-03-19T19:18:49.138109Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-19T19:18:49.138185Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-481771","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	{"level":"warn","ts":"2024-03-19T19:18:49.138259Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T19:18:49.138409Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T19:18:49.224635Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T19:18:49.224697Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T19:18:49.224765Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97ba5874d4d591f6","current-leader-member-id":"97ba5874d4d591f6"}
	{"level":"info","ts":"2024-03-19T19:18:49.228436Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-03-19T19:18:49.228658Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-03-19T19:18:49.228701Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-481771","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	
	
	==> kernel <==
	 19:23:11 up 6 min,  0 users,  load average: 1.33, 0.89, 0.42
	Linux functional-481771 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2fe6a90392e2636cb06983e6534b5f9e4e7a0a84553283bb45346723642765f] <==
	I0319 19:19:03.536502       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 19:19:03.536287       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 19:19:03.540011       1 cache.go:39] Caches are synced for autoregister controller
	E0319 19:19:03.541212       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 19:19:04.339118       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 19:19:05.084264       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 19:19:05.096611       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 19:19:05.175841       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 19:19:05.220988       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 19:19:05.230863       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 19:19:15.773173       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 19:19:15.873202       1 controller.go:624] quota admission added evaluator for: endpoints
	I0319 19:19:26.230094       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.128.27"}
	I0319 19:19:30.761332       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.111.69"}
	I0319 19:19:30.830989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0319 19:19:32.833152       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.23.142"}
	I0319 19:19:44.732077       1 trace.go:236] Trace[95080798]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3f5625cb-a3c2-47f6-b7f2-3d2fbec08293,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-yd5wgntuqo5erfapoyt73t7nni,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-yd5wgntuqo5erfapoyt73t7nni,user-agent:kube-apiserver/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PUT (19-Mar-2024 19:19:44.216) (total time: 515ms):
	Trace[95080798]: ["GuaranteedUpdate etcd3" audit-id:3f5625cb-a3c2-47f6-b7f2-3d2fbec08293,key:/leases/kube-system/apiserver-yd5wgntuqo5erfapoyt73t7nni,type:*coordination.Lease,resource:leases.coordination.k8s.io 515ms (19:19:44.216)
	Trace[95080798]:  ---"Txn call completed" 514ms (19:19:44.731)]
	Trace[95080798]: [515.9807ms] [515.9807ms] END
	E0319 19:19:56.018891       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.193:8441->192.168.39.1:45366: read: connection reset by peer
	I0319 19:19:56.248544       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.206.12"}
	I0319 19:20:05.617518       1 controller.go:624] quota admission added evaluator for: namespaces
	I0319 19:20:05.911728       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.120.154"}
	I0319 19:20:05.945131       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.110.169"}
	
	
	==> kube-controller-manager [7839c9edd38c103f5121bfa4eeb59337528ebb1ba6a250229b1c307cbad8dc86] <==
	I0319 19:18:34.451680       1 shared_informer.go:318] Caches are synced for service account
	I0319 19:18:34.459700       1 shared_informer.go:318] Caches are synced for persistent volume
	I0319 19:18:34.462173       1 shared_informer.go:318] Caches are synced for ephemeral
	I0319 19:18:34.463429       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0319 19:18:34.464667       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0319 19:18:34.469024       1 shared_informer.go:318] Caches are synced for daemon sets
	I0319 19:18:34.475463       1 shared_informer.go:318] Caches are synced for HPA
	I0319 19:18:34.479811       1 shared_informer.go:318] Caches are synced for GC
	I0319 19:18:34.481906       1 shared_informer.go:318] Caches are synced for node
	I0319 19:18:34.481963       1 range_allocator.go:174] "Sending events to api server"
	I0319 19:18:34.481980       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0319 19:18:34.481983       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0319 19:18:34.481988       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0319 19:18:34.484402       1 shared_informer.go:318] Caches are synced for PVC protection
	I0319 19:18:34.490541       1 shared_informer.go:318] Caches are synced for deployment
	I0319 19:18:34.492310       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0319 19:18:34.492532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="56.118µs"
	I0319 19:18:34.498818       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0319 19:18:34.585185       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 19:18:34.609699       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 19:18:34.615321       1 shared_informer.go:318] Caches are synced for stateful set
	I0319 19:18:34.624631       1 shared_informer.go:318] Caches are synced for disruption
	I0319 19:18:35.011710       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 19:18:35.011761       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0319 19:18:35.019054       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e63a1b085d4b0bda187d445f08f5bb09bae37437126fbcdd289bdb65e791e3a2] <==
	I0319 19:20:05.769171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.537332ms"
	E0319 19:20:05.769213       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0319 19:20:05.769481       1 event.go:376] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0319 19:20:05.771647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="9.376213ms"
	E0319 19:20:05.771693       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0319 19:20:05.771719       1 event.go:376] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0319 19:20:05.781770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.532148ms"
	I0319 19:20:05.782171       1 event.go:376] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0319 19:20:05.782862       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0319 19:20:05.799856       1 event.go:376] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-ww9l6"
	I0319 19:20:05.803262       1 event.go:376] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-sl7hw"
	I0319 19:20:05.830282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.337506ms"
	I0319 19:20:05.830705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="48.207022ms"
	I0319 19:20:05.852256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="21.491582ms"
	I0319 19:20:05.854491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.170158ms"
	I0319 19:20:05.876606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.196843ms"
	I0319 19:20:05.876769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.224µs"
	I0319 19:20:05.890425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="184.17µs"
	I0319 19:20:11.091555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="14.398243ms"
	I0319 19:20:11.096090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="110.876µs"
	I0319 19:20:18.146619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-d7447cc7f" duration="2.61351ms"
	I0319 19:20:19.147456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-d7447cc7f" duration="11.995516ms"
	I0319 19:20:19.147659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-d7447cc7f" duration="61.1µs"
	I0319 19:20:19.207884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.925584ms"
	I0319 19:20:19.208120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="98.976µs"
	
	
	==> kube-proxy [dc2bb6d8d4bf79c99fc37d89325c36e9c630ea49e4a3ab252db7134cd090be81] <==
	I0319 19:19:04.515938       1 server_others.go:72] "Using iptables proxy"
	I0319 19:19:04.536089       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0319 19:19:04.618675       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:19:04.618769       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:19:04.618808       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:19:04.621940       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:19:04.622124       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:19:04.622161       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:19:04.623907       1 config.go:188] "Starting service config controller"
	I0319 19:19:04.623940       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:19:04.623967       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:19:04.623971       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:19:04.624301       1 config.go:315] "Starting node config controller"
	I0319 19:19:04.624418       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:19:04.724395       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 19:19:04.724465       1 shared_informer.go:318] Caches are synced for node config
	I0319 19:19:04.724476       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [fe781f2ab111797470920154f60fae0b0bd4c68c2214df55d72d6af98911982c] <==
	I0319 19:18:23.655541       1 server_others.go:72] "Using iptables proxy"
	I0319 19:18:23.683983       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0319 19:18:23.767773       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:18:23.767793       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:18:23.767810       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:18:23.770659       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:18:23.770831       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:18:23.770984       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:18:23.771893       1 config.go:188] "Starting service config controller"
	I0319 19:18:23.771942       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:18:23.771971       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:18:23.771987       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:18:23.772522       1 config.go:315] "Starting node config controller"
	I0319 19:18:23.772563       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:18:23.872850       1 shared_informer.go:318] Caches are synced for node config
	I0319 19:18:23.872933       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:18:23.872957       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0c06cf4a71baedc6c17501a68ba76c8c5416f84d66a82063fe4daec34d34b96a] <==
	I0319 19:18:19.763190       1 serving.go:380] Generated self-signed cert in-memory
	W0319 19:18:22.037016       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0319 19:18:22.037135       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 19:18:22.037266       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0319 19:18:22.037293       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0319 19:18:22.095560       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 19:18:22.095710       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:18:22.105806       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 19:18:22.106674       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 19:18:22.112753       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 19:18:22.112867       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 19:18:22.208137       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 19:18:49.168805       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0319 19:18:49.168893       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 19:18:49.169017       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0319 19:18:49.172387       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [66b3981755cba439f7a565b89e36d07023cf3748e7a97d4e53f62350c82c2977] <==
	I0319 19:19:01.541924       1 serving.go:380] Generated self-signed cert in-memory
	W0319 19:19:03.401958       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0319 19:19:03.402039       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 19:19:03.402068       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0319 19:19:03.402092       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0319 19:19:03.449243       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 19:19:03.449437       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:19:03.453132       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 19:19:03.453237       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 19:19:03.455953       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 19:19:03.456029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 19:19:03.554322       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 19:21:00 functional-481771 kubelet[4478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:21:00 functional-481771 kubelet[4478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:21:00 functional-481771 kubelet[4478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.000569    4478 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod205df5af-0d30-49c2-9a59-f3f59acac431/crio-4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d: Error finding container 4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d: Status 404 returned error can't find the container with id 4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.003009    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/podef80d0d5f467e85879d22835470eeeab/crio-86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4: Error finding container 86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4: Status 404 returned error can't find the container with id 86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.003606    4478 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod1d54e4fb-68b7-432a-ad8d-32a232689162/crio-4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc: Error finding container 4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc: Status 404 returned error can't find the container with id 4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.003883    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod74290a2a2871a79f994e00ecc865513d/crio-429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5: Error finding container 429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5: Status 404 returned error can't find the container with id 429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.004470    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0/crio-d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e: Error finding container d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e: Status 404 returned error can't find the container with id d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.005271    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc72dfde4fa877784b1531a1a323f5fa1/crio-db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573: Error finding container db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573: Status 404 returned error can't find the container with id db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573
	Mar 19 19:22:00 functional-481771 kubelet[4478]: E0319 19:22:00.024793    4478 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:22:00 functional-481771 kubelet[4478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:22:00 functional-481771 kubelet[4478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:22:00 functional-481771 kubelet[4478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:22:00 functional-481771 kubelet[4478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:22:59 functional-481771 kubelet[4478]: E0319 19:22:59.997014    4478 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod205df5af-0d30-49c2-9a59-f3f59acac431/crio-4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d: Error finding container 4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d: Status 404 returned error can't find the container with id 4b1fc8dcd7271f638ef6325dea3d94fb42fd3dbb463a140c2dc97c0dcc4f365d
	Mar 19 19:22:59 functional-481771 kubelet[4478]: E0319 19:22:59.997560    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod4e81ba88-94cb-4ef1-8ad5-74649c3a9dc0/crio-d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e: Error finding container d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e: Status 404 returned error can't find the container with id d156c7bae59a7827bb8d201a72ad9a4346a7063fa4c0f86f8900233c657bc36e
	Mar 19 19:22:59 functional-481771 kubelet[4478]: E0319 19:22:59.998076    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/podef80d0d5f467e85879d22835470eeeab/crio-86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4: Error finding container 86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4: Status 404 returned error can't find the container with id 86329c6d0a0d7b9776da7e3d62d9ac16b28535c10c96ac5297a07a59173e14c4
	Mar 19 19:22:59 functional-481771 kubelet[4478]: E0319 19:22:59.998509    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod74290a2a2871a79f994e00ecc865513d/crio-429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5: Error finding container 429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5: Status 404 returned error can't find the container with id 429e93b456d403d4de0cf0caebf0614e04419a44cc583a613b7fe8c7148b0fd5
	Mar 19 19:22:59 functional-481771 kubelet[4478]: E0319 19:22:59.998942    4478 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod1d54e4fb-68b7-432a-ad8d-32a232689162/crio-4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc: Error finding container 4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc: Status 404 returned error can't find the container with id 4bea36ad52cde423cea9f8e358c87e7f5b23732d97a8b09d9584d102dad128dc
	Mar 19 19:22:59 functional-481771 kubelet[4478]: E0319 19:22:59.999264    4478 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc72dfde4fa877784b1531a1a323f5fa1/crio-db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573: Error finding container db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573: Status 404 returned error can't find the container with id db3156b6f49b49d6a49ea6e32e63a20468b66c1fd728c7db4b461296b4732573
	Mar 19 19:23:00 functional-481771 kubelet[4478]: E0319 19:23:00.022790    4478 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:23:00 functional-481771 kubelet[4478]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:23:00 functional-481771 kubelet[4478]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:23:00 functional-481771 kubelet[4478]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:23:00 functional-481771 kubelet[4478]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> kubernetes-dashboard [f460620c42ff0e656647be73114d543b03901c81ab9f992d493c8f0a7469d311] <==
	2024/03/19 19:20:18 Using namespace: kubernetes-dashboard
	2024/03/19 19:20:18 Using in-cluster config to connect to apiserver
	2024/03/19 19:20:18 Using secret token for csrf signing
	2024/03/19 19:20:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/19 19:20:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/19 19:20:18 Successful initial request to the apiserver, version: v1.29.3
	2024/03/19 19:20:18 Generating JWE encryption key
	2024/03/19 19:20:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/19 19:20:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/19 19:20:18 Initializing JWE encryption key from synchronized object
	2024/03/19 19:20:18 Creating in-cluster Sidecar client
	2024/03/19 19:20:18 Successful request to sidecar
	2024/03/19 19:20:18 Serving insecurely on HTTP port: 9090
	2024/03/19 19:20:18 Starting overwatch
	
	
	==> storage-provisioner [13de1221d2ab6198bdb2e2e38298d8a04a4a92e88d534e4a698015584edd6a59] <==
	I0319 19:18:23.615494       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 19:18:23.633047       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 19:18:23.633112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 19:18:23.652065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 19:18:23.652120       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7309b346-5a65-4d0f-9850-421ecc9f7a47", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-481771_9c99aa10-eb1c-4de0-98f8-7c665f50af54 became leader
	I0319 19:18:23.652217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-481771_9c99aa10-eb1c-4de0-98f8-7c665f50af54!
	I0319 19:18:23.758492       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-481771_9c99aa10-eb1c-4de0-98f8-7c665f50af54!
	
	
	==> storage-provisioner [26b064ab99a09bb449c1ca27b71c172ad4a6753a8e971f8ffdebebea3426fabc] <==
	I0319 19:19:04.468935       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 19:19:04.492763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 19:19:04.492827       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 19:19:21.898520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 19:19:21.899115       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7309b346-5a65-4d0f-9850-421ecc9f7a47", APIVersion:"v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-481771_1f62929a-c8ae-4a3e-b72b-4e55e04db908 became leader
	I0319 19:19:21.899195       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-481771_1f62929a-c8ae-4a3e-b72b-4e55e04db908!
	I0319 19:19:22.000226       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-481771_1f62929a-c8ae-4a3e-b72b-4e55e04db908!
	I0319 19:19:38.453115       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0319 19:19:38.453274       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    0de34311-6567-406d-b5cb-52aa6a9b98f2 345 0 2024-03-19 19:17:58 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-19 19:17:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-72fe85e2-2cfd-4023-9e2b-f649dd0bead3 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  72fe85e2-2cfd-4023-9e2b-f649dd0bead3 665 0 2024-03-19 19:19:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-19 19:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-19 19:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0319 19:19:38.453785       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-72fe85e2-2cfd-4023-9e2b-f649dd0bead3" provisioned
	I0319 19:19:38.453801       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0319 19:19:38.453809       1 volume_store.go:212] Trying to save persistentvolume "pvc-72fe85e2-2cfd-4023-9e2b-f649dd0bead3"
	I0319 19:19:38.459782       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"72fe85e2-2cfd-4023-9e2b-f649dd0bead3", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0319 19:19:38.479300       1 volume_store.go:219] persistentvolume "pvc-72fe85e2-2cfd-4023-9e2b-f649dd0bead3" saved
	I0319 19:19:38.490594       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"72fe85e2-2cfd-4023-9e2b-f649dd0bead3", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-72fe85e2-2cfd-4023-9e2b-f649dd0bead3
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-481771 -n functional-481771
helpers_test.go:261: (dbg) Run:  kubectl --context functional-481771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-481771 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-481771 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-481771/192.168.39.193
	Start Time:       Tue, 19 Mar 2024 19:20:01 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f976383c6c3693e052a99139ae646f64efde7026edcf2048b80d06acfe98b40b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 19 Mar 2024 19:20:05 +0000
	      Finished:     Tue, 19 Mar 2024 19:20:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wf876 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wf876:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m10s  default-scheduler  Successfully assigned default/busybox-mount to functional-481771
	  Normal  Pulling    3m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m7s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.414s (3.414s including waiting)
	  Normal  Created    3m7s   kubelet            Created container mount-munger
	  Normal  Started    3m7s   kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-481771/192.168.39.193
	Start Time:       Tue, 19 Mar 2024 19:20:09 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4ffql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4ffql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m2s   default-scheduler  Successfully assigned default/sp-pod to functional-481771
	  Normal  Pulling    3m2s   kubelet            Pulling image "docker.io/nginx"
	  Normal  Pulled     2m53s  kubelet            Successfully pulled image "docker.io/nginx" in 1.042s (8.844s including waiting)

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (220.11s)
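The post-mortem above shows the hostpath provisioner creating and saving pvc-72fe85e2-2cfd-4023-9e2b-f649dd0bead3 while sp-pod is still Pending in ContainerCreating, its nginx image already pulled. A minimal manual follow-up for this kind of failure (a sketch, not part of the test; it assumes the same profile/context names and the hostpath path reported in the provisioner log above) could be:

# Did the claim bind, and does the backing volume exist on the node?
kubectl --context functional-481771 get pvc myclaim -o wide
kubectl --context functional-481771 get pv
kubectl --context functional-481771 describe pod sp-pod
# Inspect the provisioner-created directory inside the minikube VM
out/minikube-linux-amd64 -p functional-481771 ssh -- ls -la /tmp/hostpath-provisioner/default/myclaim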

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 node stop m02 -v=7 --alsologtostderr
E0319 19:29:30.843852   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:29:58.527032   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:30:04.835002   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.475966539s)

                                                
                                                
-- stdout --
	* Stopping node "ha-218762-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:28:09.682036   31145 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:28:09.682172   31145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:28:09.682182   31145 out.go:304] Setting ErrFile to fd 2...
	I0319 19:28:09.682187   31145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:28:09.682405   31145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:28:09.683034   31145 mustload.go:65] Loading cluster: ha-218762
	I0319 19:28:09.684103   31145 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:28:09.684120   31145 stop.go:39] StopHost: ha-218762-m02
	I0319 19:28:09.684706   31145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:28:09.684746   31145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:28:09.700302   31145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39859
	I0319 19:28:09.700769   31145 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:28:09.701398   31145 main.go:141] libmachine: Using API Version  1
	I0319 19:28:09.701432   31145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:28:09.701756   31145 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:28:09.703919   31145 out.go:177] * Stopping node "ha-218762-m02"  ...
	I0319 19:28:09.705295   31145 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 19:28:09.705341   31145 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:28:09.705573   31145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 19:28:09.705601   31145 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:28:09.708904   31145 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:28:09.709340   31145 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:28:09.709375   31145 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:28:09.709540   31145 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:28:09.709706   31145 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:28:09.709827   31145 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:28:09.709960   31145 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:28:09.798666   31145 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 19:28:09.855961   31145 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 19:28:09.914848   31145 main.go:141] libmachine: Stopping "ha-218762-m02"...
	I0319 19:28:09.914881   31145 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:28:09.916433   31145 main.go:141] libmachine: (ha-218762-m02) Calling .Stop
	I0319 19:28:09.919985   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 0/120
	I0319 19:28:10.921211   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 1/120
	I0319 19:28:11.922571   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 2/120
	I0319 19:28:12.924538   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 3/120
	I0319 19:28:13.925838   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 4/120
	I0319 19:28:14.927612   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 5/120
	I0319 19:28:15.928900   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 6/120
	I0319 19:28:16.930701   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 7/120
	I0319 19:28:17.932186   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 8/120
	I0319 19:28:18.933453   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 9/120
	I0319 19:28:19.935722   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 10/120
	I0319 19:28:20.937891   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 11/120
	I0319 19:28:21.939725   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 12/120
	I0319 19:28:22.940936   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 13/120
	I0319 19:28:23.942771   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 14/120
	I0319 19:28:24.944715   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 15/120
	I0319 19:28:25.946566   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 16/120
	I0319 19:28:26.948191   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 17/120
	I0319 19:28:27.949479   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 18/120
	I0319 19:28:28.950859   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 19/120
	I0319 19:28:29.952939   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 20/120
	I0319 19:28:30.954743   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 21/120
	I0319 19:28:31.956722   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 22/120
	I0319 19:28:32.958887   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 23/120
	I0319 19:28:33.960557   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 24/120
	I0319 19:28:34.961730   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 25/120
	I0319 19:28:35.963019   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 26/120
	I0319 19:28:36.964961   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 27/120
	I0319 19:28:37.966933   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 28/120
	I0319 19:28:38.968033   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 29/120
	I0319 19:28:39.969915   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 30/120
	I0319 19:28:40.971487   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 31/120
	I0319 19:28:41.973083   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 32/120
	I0319 19:28:42.974965   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 33/120
	I0319 19:28:43.976288   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 34/120
	I0319 19:28:44.977676   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 35/120
	I0319 19:28:45.978929   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 36/120
	I0319 19:28:46.980300   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 37/120
	I0319 19:28:47.981717   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 38/120
	I0319 19:28:48.983109   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 39/120
	I0319 19:28:49.984669   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 40/120
	I0319 19:28:50.985826   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 41/120
	I0319 19:28:51.987208   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 42/120
	I0319 19:28:52.988596   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 43/120
	I0319 19:28:53.989836   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 44/120
	I0319 19:28:54.991490   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 45/120
	I0319 19:28:55.992807   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 46/120
	I0319 19:28:56.994758   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 47/120
	I0319 19:28:57.996074   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 48/120
	I0319 19:28:58.997312   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 49/120
	I0319 19:28:59.998698   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 50/120
	I0319 19:29:01.000331   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 51/120
	I0319 19:29:02.001650   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 52/120
	I0319 19:29:03.002976   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 53/120
	I0319 19:29:04.004311   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 54/120
	I0319 19:29:05.006426   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 55/120
	I0319 19:29:06.008055   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 56/120
	I0319 19:29:07.009506   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 57/120
	I0319 19:29:08.010875   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 58/120
	I0319 19:29:09.012295   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 59/120
	I0319 19:29:10.014261   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 60/120
	I0319 19:29:11.015740   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 61/120
	I0319 19:29:12.016945   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 62/120
	I0319 19:29:13.018286   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 63/120
	I0319 19:29:14.019684   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 64/120
	I0319 19:29:15.020901   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 65/120
	I0319 19:29:16.022718   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 66/120
	I0319 19:29:17.024084   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 67/120
	I0319 19:29:18.025545   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 68/120
	I0319 19:29:19.026848   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 69/120
	I0319 19:29:20.028809   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 70/120
	I0319 19:29:21.030631   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 71/120
	I0319 19:29:22.032833   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 72/120
	I0319 19:29:23.034285   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 73/120
	I0319 19:29:24.036156   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 74/120
	I0319 19:29:25.037966   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 75/120
	I0319 19:29:26.039598   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 76/120
	I0319 19:29:27.040858   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 77/120
	I0319 19:29:28.042715   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 78/120
	I0319 19:29:29.043948   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 79/120
	I0319 19:29:30.045789   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 80/120
	I0319 19:29:31.047039   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 81/120
	I0319 19:29:32.048325   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 82/120
	I0319 19:29:33.049641   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 83/120
	I0319 19:29:34.050793   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 84/120
	I0319 19:29:35.052677   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 85/120
	I0319 19:29:36.054810   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 86/120
	I0319 19:29:37.057009   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 87/120
	I0319 19:29:38.058702   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 88/120
	I0319 19:29:39.059989   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 89/120
	I0319 19:29:40.062158   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 90/120
	I0319 19:29:41.063891   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 91/120
	I0319 19:29:42.065240   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 92/120
	I0319 19:29:43.066648   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 93/120
	I0319 19:29:44.067997   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 94/120
	I0319 19:29:45.069454   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 95/120
	I0319 19:29:46.070842   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 96/120
	I0319 19:29:47.072083   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 97/120
	I0319 19:29:48.073348   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 98/120
	I0319 19:29:49.074554   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 99/120
	I0319 19:29:50.076815   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 100/120
	I0319 19:29:51.078768   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 101/120
	I0319 19:29:52.080086   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 102/120
	I0319 19:29:53.081444   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 103/120
	I0319 19:29:54.082827   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 104/120
	I0319 19:29:55.084292   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 105/120
	I0319 19:29:56.085624   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 106/120
	I0319 19:29:57.087050   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 107/120
	I0319 19:29:58.089314   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 108/120
	I0319 19:29:59.090511   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 109/120
	I0319 19:30:00.091928   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 110/120
	I0319 19:30:01.093924   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 111/120
	I0319 19:30:02.095272   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 112/120
	I0319 19:30:03.096541   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 113/120
	I0319 19:30:04.097859   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 114/120
	I0319 19:30:05.100046   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 115/120
	I0319 19:30:06.101244   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 116/120
	I0319 19:30:07.102585   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 117/120
	I0319 19:30:08.104037   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 118/120
	I0319 19:30:09.105239   31145 main.go:141] libmachine: (ha-218762-m02) Waiting for machine to stop 119/120
	I0319 19:30:10.106294   31145 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0319 19:30:10.106416   31145 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-218762 node stop m02 -v=7 --alsologtostderr": exit status 30
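The stop attempt above exhausted libmachine's 120 one-second retries with the guest still reported as Running. Outside the harness, one way to double-check the libvirt side (a sketch under the assumption that the default qemu:///system URI and the domain name shown in the log are in use; the destroy step is commented out because it is a forced power-off and not what the test does) would be:

virsh --connect qemu:///system domstate ha-218762-m02
virsh --connect qemu:///system list --all
# virsh --connect qemu:///system destroy ha-218762-m02   # force power-off, last resort only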
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (19.183899081s)

                                                
                                                
-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:30:10.163723   31452 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:30:10.163832   31452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:10.163841   31452 out.go:304] Setting ErrFile to fd 2...
	I0319 19:30:10.163845   31452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:10.164073   31452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:30:10.164279   31452 out.go:298] Setting JSON to false
	I0319 19:30:10.164310   31452 mustload.go:65] Loading cluster: ha-218762
	I0319 19:30:10.164368   31452 notify.go:220] Checking for updates...
	I0319 19:30:10.164728   31452 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:30:10.164744   31452 status.go:255] checking status of ha-218762 ...
	I0319 19:30:10.165126   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:10.165182   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:10.182927   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0319 19:30:10.183300   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:10.183866   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:10.183885   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:10.184340   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:10.184544   31452 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:30:10.186418   31452 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:30:10.186435   31452 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:10.186715   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:10.186743   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:10.201242   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0319 19:30:10.201579   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:10.202031   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:10.202058   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:10.202328   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:10.202548   31452 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:30:10.205169   31452 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:10.205606   31452 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:10.205628   31452 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:10.205751   31452 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:10.206116   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:10.206158   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:10.221239   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
	I0319 19:30:10.221755   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:10.222305   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:10.222328   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:10.222669   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:10.222846   31452 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:30:10.223043   31452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:10.223073   31452 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:30:10.226173   31452 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:10.226642   31452 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:10.226673   31452 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:10.226850   31452 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:30:10.227058   31452 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:30:10.227219   31452 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:30:10.227339   31452 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:30:10.316058   31452 ssh_runner.go:195] Run: systemctl --version
	I0319 19:30:10.323656   31452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:10.342771   31452 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:10.342795   31452 api_server.go:166] Checking apiserver status ...
	I0319 19:30:10.342831   31452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:10.359193   31452 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:30:10.369130   31452 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:10.369190   31452 ssh_runner.go:195] Run: ls
	I0319 19:30:10.373903   31452 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:10.378733   31452 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:10.378752   31452 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:30:10.378761   31452 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:10.378776   31452 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:30:10.379061   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:10.379104   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:10.393147   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0319 19:30:10.393481   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:10.393880   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:10.393895   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:10.394235   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:10.394421   31452 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:30:10.395925   31452 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:30:10.395940   31452 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:10.396248   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:10.396297   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:10.410424   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44541
	I0319 19:30:10.410787   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:10.411247   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:10.411261   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:10.411613   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:10.411813   31452 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:30:10.414270   31452 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:10.414714   31452 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:10.414741   31452 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:10.414863   31452 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:10.415586   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:10.415681   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:10.431658   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0319 19:30:10.432115   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:10.432663   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:10.432695   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:10.433029   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:10.433238   31452 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:30:10.433426   31452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:10.433448   31452 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:30:10.436238   31452 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:10.436724   31452 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:10.436746   31452 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:10.436911   31452 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:30:10.437061   31452 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:30:10.437181   31452 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:30:10.437313   31452 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	W0319 19:30:28.916483   31452 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:28.916612   31452 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	E0319 19:30:28.916637   31452 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:28.916657   31452 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:30:28.916685   31452 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:28.916696   31452 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:30:28.917140   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:28.917190   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:28.931670   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I0319 19:30:28.932064   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:28.932576   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:28.932595   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:28.932881   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:28.933094   31452 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:30:28.934811   31452 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:30:28.934826   31452 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:28.935201   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:28.935244   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:28.950949   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39283
	I0319 19:30:28.951368   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:28.951844   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:28.951866   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:28.952186   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:28.952414   31452 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:30:28.955037   31452 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:28.955418   31452 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:28.955444   31452 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:28.955554   31452 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:28.955830   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:28.955863   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:28.969896   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0319 19:30:28.970270   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:28.970656   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:28.970676   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:28.971004   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:28.971203   31452 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:30:28.971392   31452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:28.971414   31452 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:30:28.973914   31452 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:28.974306   31452 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:28.974334   31452 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:28.974452   31452 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:30:28.974596   31452 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:30:28.974766   31452 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:30:28.974916   31452 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:30:29.059297   31452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:29.079757   31452 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:29.079781   31452 api_server.go:166] Checking apiserver status ...
	I0319 19:30:29.079826   31452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:29.095706   31452 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:30:29.107695   31452 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:29.107738   31452 ssh_runner.go:195] Run: ls
	I0319 19:30:29.118474   31452 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:29.122864   31452 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:29.122883   31452 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:30:29.122894   31452 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:29.122914   31452 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:30:29.123270   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:29.123309   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:29.137959   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0319 19:30:29.138465   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:29.138955   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:29.138986   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:29.139329   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:29.139516   31452 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:30:29.141005   31452 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:30:29.141020   31452 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:29.141306   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:29.141354   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:29.155575   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0319 19:30:29.155886   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:29.156322   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:29.156347   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:29.156652   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:29.156819   31452 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:30:29.159445   31452 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:29.159946   31452 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:29.159968   31452 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:29.160147   31452 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:29.160555   31452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:29.160595   31452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:29.175323   31452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0319 19:30:29.175723   31452 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:29.176138   31452 main.go:141] libmachine: Using API Version  1
	I0319 19:30:29.176156   31452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:29.176519   31452 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:29.176705   31452 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:30:29.176891   31452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:29.176917   31452 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:30:29.179664   31452 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:29.180056   31452 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:29.180080   31452 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:29.180214   31452 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:30:29.180390   31452 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:30:29.180561   31452 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:30:29.180727   31452 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:30:29.270493   31452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:29.289848   31452 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr" : exit status 3
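The status failure itself comes from the SSH dial to 192.168.39.234:22 returning "no route to host", suggesting the node became unreachable some time after the stop attempt gave up. A quick reachability probe (a hypothetical manual check, reusing the key path and docker user recorded in the stderr above) might be:

nc -vz -w 5 192.168.39.234 22
ssh -o ConnectTimeout=5 -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa docker@192.168.39.234 true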
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-218762 -n ha-218762
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-218762 logs -n 25: (1.548120419s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m03_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m04 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp testdata/cp-test.txt                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m04_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03:/home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m03 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-218762 node stop m02 -v=7                                                     | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:23:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:23:13.578354   27348 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:23:13.578457   27348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:23:13.578468   27348 out.go:304] Setting ErrFile to fd 2...
	I0319 19:23:13.578472   27348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:23:13.578647   27348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:23:13.579240   27348 out.go:298] Setting JSON to false
	I0319 19:23:13.580101   27348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3892,"bootTime":1710872302,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:23:13.580155   27348 start.go:139] virtualization: kvm guest
	I0319 19:23:13.582378   27348 out.go:177] * [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:23:13.583824   27348 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:23:13.583830   27348 notify.go:220] Checking for updates...
	I0319 19:23:13.585154   27348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:23:13.586458   27348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:23:13.587615   27348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:23:13.588969   27348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:23:13.590067   27348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:23:13.591295   27348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:23:13.624196   27348 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 19:23:13.625498   27348 start.go:297] selected driver: kvm2
	I0319 19:23:13.625510   27348 start.go:901] validating driver "kvm2" against <nil>
	I0319 19:23:13.625520   27348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:23:13.626162   27348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:23:13.626226   27348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:23:13.640062   27348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:23:13.640098   27348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 19:23:13.640328   27348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:23:13.640399   27348 cni.go:84] Creating CNI manager for ""
	I0319 19:23:13.640422   27348 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0319 19:23:13.640432   27348 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0319 19:23:13.640507   27348 start.go:340] cluster config:
	{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:23:13.640644   27348 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:23:13.642511   27348 out.go:177] * Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	I0319 19:23:13.643758   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:23:13.643785   27348 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:23:13.643790   27348 cache.go:56] Caching tarball of preloaded images
	I0319 19:23:13.643870   27348 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:23:13.643884   27348 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:23:13.644148   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:23:13.644166   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json: {Name:mka9a0c31e052f0341976073e8a572d7e1505326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:13.644303   27348 start.go:360] acquireMachinesLock for ha-218762: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:23:13.644337   27348 start.go:364] duration metric: took 19.537µs to acquireMachinesLock for "ha-218762"
	I0319 19:23:13.644354   27348 start.go:93] Provisioning new machine with config: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:23:13.644414   27348 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 19:23:13.645899   27348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 19:23:13.646009   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:23:13.646048   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:23:13.659854   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0319 19:23:13.660295   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:23:13.660824   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:23:13.660846   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:23:13.661137   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:23:13.661286   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:13.661439   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:13.661551   27348 start.go:159] libmachine.API.Create for "ha-218762" (driver="kvm2")
	I0319 19:23:13.661577   27348 client.go:168] LocalClient.Create starting
	I0319 19:23:13.661606   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:23:13.661643   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:23:13.661656   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:23:13.661706   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:23:13.661723   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:23:13.661734   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:23:13.661758   27348 main.go:141] libmachine: Running pre-create checks...
	I0319 19:23:13.661771   27348 main.go:141] libmachine: (ha-218762) Calling .PreCreateCheck
	I0319 19:23:13.662091   27348 main.go:141] libmachine: (ha-218762) Calling .GetConfigRaw
	I0319 19:23:13.662411   27348 main.go:141] libmachine: Creating machine...
	I0319 19:23:13.662423   27348 main.go:141] libmachine: (ha-218762) Calling .Create
	I0319 19:23:13.662532   27348 main.go:141] libmachine: (ha-218762) Creating KVM machine...
	I0319 19:23:13.663650   27348 main.go:141] libmachine: (ha-218762) DBG | found existing default KVM network
	I0319 19:23:13.664218   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:13.664103   27371 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0319 19:23:13.664252   27348 main.go:141] libmachine: (ha-218762) DBG | created network xml: 
	I0319 19:23:13.664303   27348 main.go:141] libmachine: (ha-218762) DBG | <network>
	I0319 19:23:13.664318   27348 main.go:141] libmachine: (ha-218762) DBG |   <name>mk-ha-218762</name>
	I0319 19:23:13.664328   27348 main.go:141] libmachine: (ha-218762) DBG |   <dns enable='no'/>
	I0319 19:23:13.664339   27348 main.go:141] libmachine: (ha-218762) DBG |   
	I0319 19:23:13.664350   27348 main.go:141] libmachine: (ha-218762) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0319 19:23:13.664364   27348 main.go:141] libmachine: (ha-218762) DBG |     <dhcp>
	I0319 19:23:13.664374   27348 main.go:141] libmachine: (ha-218762) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0319 19:23:13.664387   27348 main.go:141] libmachine: (ha-218762) DBG |     </dhcp>
	I0319 19:23:13.664400   27348 main.go:141] libmachine: (ha-218762) DBG |   </ip>
	I0319 19:23:13.664422   27348 main.go:141] libmachine: (ha-218762) DBG |   
	I0319 19:23:13.664446   27348 main.go:141] libmachine: (ha-218762) DBG | </network>
	I0319 19:23:13.664460   27348 main.go:141] libmachine: (ha-218762) DBG | 
	I0319 19:23:13.669054   27348 main.go:141] libmachine: (ha-218762) DBG | trying to create private KVM network mk-ha-218762 192.168.39.0/24...
	I0319 19:23:13.731711   27348 main.go:141] libmachine: (ha-218762) DBG | private KVM network mk-ha-218762 192.168.39.0/24 created
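
	The network XML printed above is what the kvm2 driver hands to libvirt. For reference, the same private network could be created by hand with virsh; a minimal sketch (the file name net.xml is an assumption, not taken from the log):

	    virsh net-define net.xml          # register mk-ha-218762 from the XML above
	    virsh net-start mk-ha-218762      # activate the network
	    virsh net-autostart mk-ha-218762  # optionally bring it up on host boot
	    virsh net-list --all              # confirm it is listed as active
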
	I0319 19:23:13.731745   27348 main.go:141] libmachine: (ha-218762) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762 ...
	I0319 19:23:13.731776   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:13.731677   27371 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:23:13.731794   27348 main.go:141] libmachine: (ha-218762) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:23:13.731822   27348 main.go:141] libmachine: (ha-218762) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:23:13.954575   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:13.954473   27371 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa...
	I0319 19:23:14.183743   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:14.183531   27371 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/ha-218762.rawdisk...
	I0319 19:23:14.183827   27348 main.go:141] libmachine: (ha-218762) DBG | Writing magic tar header
	I0319 19:23:14.183846   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762 (perms=drwx------)
	I0319 19:23:14.183894   27348 main.go:141] libmachine: (ha-218762) DBG | Writing SSH key tar header
	I0319 19:23:14.183926   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:14.183675   27371 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762 ...
	I0319 19:23:14.183940   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:23:14.183953   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:23:14.183964   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762
	I0319 19:23:14.183975   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:23:14.183987   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:23:14.183997   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:23:14.184011   27348 main.go:141] libmachine: (ha-218762) Creating domain...
	I0319 19:23:14.184027   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:23:14.184039   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:23:14.184060   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:23:14.184080   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:23:14.184092   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:23:14.184103   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home
	I0319 19:23:14.184113   27348 main.go:141] libmachine: (ha-218762) DBG | Skipping /home - not owner
	I0319 19:23:14.185044   27348 main.go:141] libmachine: (ha-218762) define libvirt domain using xml: 
	I0319 19:23:14.185068   27348 main.go:141] libmachine: (ha-218762) <domain type='kvm'>
	I0319 19:23:14.185079   27348 main.go:141] libmachine: (ha-218762)   <name>ha-218762</name>
	I0319 19:23:14.185086   27348 main.go:141] libmachine: (ha-218762)   <memory unit='MiB'>2200</memory>
	I0319 19:23:14.185109   27348 main.go:141] libmachine: (ha-218762)   <vcpu>2</vcpu>
	I0319 19:23:14.185121   27348 main.go:141] libmachine: (ha-218762)   <features>
	I0319 19:23:14.185127   27348 main.go:141] libmachine: (ha-218762)     <acpi/>
	I0319 19:23:14.185140   27348 main.go:141] libmachine: (ha-218762)     <apic/>
	I0319 19:23:14.185152   27348 main.go:141] libmachine: (ha-218762)     <pae/>
	I0319 19:23:14.185165   27348 main.go:141] libmachine: (ha-218762)     
	I0319 19:23:14.185175   27348 main.go:141] libmachine: (ha-218762)   </features>
	I0319 19:23:14.185202   27348 main.go:141] libmachine: (ha-218762)   <cpu mode='host-passthrough'>
	I0319 19:23:14.185215   27348 main.go:141] libmachine: (ha-218762)   
	I0319 19:23:14.185221   27348 main.go:141] libmachine: (ha-218762)   </cpu>
	I0319 19:23:14.185225   27348 main.go:141] libmachine: (ha-218762)   <os>
	I0319 19:23:14.185233   27348 main.go:141] libmachine: (ha-218762)     <type>hvm</type>
	I0319 19:23:14.185237   27348 main.go:141] libmachine: (ha-218762)     <boot dev='cdrom'/>
	I0319 19:23:14.185242   27348 main.go:141] libmachine: (ha-218762)     <boot dev='hd'/>
	I0319 19:23:14.185250   27348 main.go:141] libmachine: (ha-218762)     <bootmenu enable='no'/>
	I0319 19:23:14.185254   27348 main.go:141] libmachine: (ha-218762)   </os>
	I0319 19:23:14.185259   27348 main.go:141] libmachine: (ha-218762)   <devices>
	I0319 19:23:14.185265   27348 main.go:141] libmachine: (ha-218762)     <disk type='file' device='cdrom'>
	I0319 19:23:14.185271   27348 main.go:141] libmachine: (ha-218762)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/boot2docker.iso'/>
	I0319 19:23:14.185276   27348 main.go:141] libmachine: (ha-218762)       <target dev='hdc' bus='scsi'/>
	I0319 19:23:14.185283   27348 main.go:141] libmachine: (ha-218762)       <readonly/>
	I0319 19:23:14.185288   27348 main.go:141] libmachine: (ha-218762)     </disk>
	I0319 19:23:14.185292   27348 main.go:141] libmachine: (ha-218762)     <disk type='file' device='disk'>
	I0319 19:23:14.185305   27348 main.go:141] libmachine: (ha-218762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:23:14.185311   27348 main.go:141] libmachine: (ha-218762)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/ha-218762.rawdisk'/>
	I0319 19:23:14.185316   27348 main.go:141] libmachine: (ha-218762)       <target dev='hda' bus='virtio'/>
	I0319 19:23:14.185321   27348 main.go:141] libmachine: (ha-218762)     </disk>
	I0319 19:23:14.185326   27348 main.go:141] libmachine: (ha-218762)     <interface type='network'>
	I0319 19:23:14.185330   27348 main.go:141] libmachine: (ha-218762)       <source network='mk-ha-218762'/>
	I0319 19:23:14.185338   27348 main.go:141] libmachine: (ha-218762)       <model type='virtio'/>
	I0319 19:23:14.185342   27348 main.go:141] libmachine: (ha-218762)     </interface>
	I0319 19:23:14.185347   27348 main.go:141] libmachine: (ha-218762)     <interface type='network'>
	I0319 19:23:14.185351   27348 main.go:141] libmachine: (ha-218762)       <source network='default'/>
	I0319 19:23:14.185361   27348 main.go:141] libmachine: (ha-218762)       <model type='virtio'/>
	I0319 19:23:14.185366   27348 main.go:141] libmachine: (ha-218762)     </interface>
	I0319 19:23:14.185370   27348 main.go:141] libmachine: (ha-218762)     <serial type='pty'>
	I0319 19:23:14.185374   27348 main.go:141] libmachine: (ha-218762)       <target port='0'/>
	I0319 19:23:14.185379   27348 main.go:141] libmachine: (ha-218762)     </serial>
	I0319 19:23:14.185383   27348 main.go:141] libmachine: (ha-218762)     <console type='pty'>
	I0319 19:23:14.185388   27348 main.go:141] libmachine: (ha-218762)       <target type='serial' port='0'/>
	I0319 19:23:14.185402   27348 main.go:141] libmachine: (ha-218762)     </console>
	I0319 19:23:14.185424   27348 main.go:141] libmachine: (ha-218762)     <rng model='virtio'>
	I0319 19:23:14.185465   27348 main.go:141] libmachine: (ha-218762)       <backend model='random'>/dev/random</backend>
	I0319 19:23:14.185478   27348 main.go:141] libmachine: (ha-218762)     </rng>
	I0319 19:23:14.185487   27348 main.go:141] libmachine: (ha-218762)     
	I0319 19:23:14.185493   27348 main.go:141] libmachine: (ha-218762)     
	I0319 19:23:14.185503   27348 main.go:141] libmachine: (ha-218762)   </devices>
	I0319 19:23:14.185511   27348 main.go:141] libmachine: (ha-218762) </domain>
	I0319 19:23:14.185522   27348 main.go:141] libmachine: (ha-218762) 
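
	The domain XML above is then defined and booted through the libvirt API. Done by hand, the equivalent virsh steps would look roughly like this (a sketch, assuming the XML was saved as ha-218762.xml):

	    virsh define ha-218762.xml     # register the ha-218762 domain
	    virsh start ha-218762          # boot the VM
	    virsh domifaddr ha-218762      # query the DHCP lease once the guest is up
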
	I0319 19:23:14.189592   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:f0:c3:61 in network default
	I0319 19:23:14.190109   27348 main.go:141] libmachine: (ha-218762) Ensuring networks are active...
	I0319 19:23:14.190128   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:14.190681   27348 main.go:141] libmachine: (ha-218762) Ensuring network default is active
	I0319 19:23:14.190947   27348 main.go:141] libmachine: (ha-218762) Ensuring network mk-ha-218762 is active
	I0319 19:23:14.191375   27348 main.go:141] libmachine: (ha-218762) Getting domain xml...
	I0319 19:23:14.192006   27348 main.go:141] libmachine: (ha-218762) Creating domain...
	I0319 19:23:15.345251   27348 main.go:141] libmachine: (ha-218762) Waiting to get IP...
	I0319 19:23:15.345976   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:15.346360   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:15.346403   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:15.346356   27371 retry.go:31] will retry after 309.498905ms: waiting for machine to come up
	I0319 19:23:15.657770   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:15.658192   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:15.658216   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:15.658149   27371 retry.go:31] will retry after 276.733838ms: waiting for machine to come up
	I0319 19:23:15.936591   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:15.937000   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:15.937030   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:15.936953   27371 retry.go:31] will retry after 358.761144ms: waiting for machine to come up
	I0319 19:23:16.297370   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:16.297822   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:16.297845   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:16.297775   27371 retry.go:31] will retry after 555.023033ms: waiting for machine to come up
	I0319 19:23:16.854501   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:16.854954   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:16.854985   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:16.854900   27371 retry.go:31] will retry after 485.696214ms: waiting for machine to come up
	I0319 19:23:17.342321   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:17.342821   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:17.342848   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:17.342788   27371 retry.go:31] will retry after 799.596882ms: waiting for machine to come up
	I0319 19:23:18.143605   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:18.144020   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:18.144053   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:18.143980   27371 retry.go:31] will retry after 779.78661ms: waiting for machine to come up
	I0319 19:23:18.925208   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:18.925603   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:18.925632   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:18.925542   27371 retry.go:31] will retry after 1.214561373s: waiting for machine to come up
	I0319 19:23:20.141785   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:20.142140   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:20.142160   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:20.142111   27371 retry.go:31] will retry after 1.178568266s: waiting for machine to come up
	I0319 19:23:21.321878   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:21.322139   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:21.322166   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:21.322104   27371 retry.go:31] will retry after 1.566328576s: waiting for machine to come up
	I0319 19:23:22.889584   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:22.890005   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:22.890057   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:22.889947   27371 retry.go:31] will retry after 1.840325389s: waiting for machine to come up
	I0319 19:23:24.731419   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:24.731835   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:24.731863   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:24.731800   27371 retry.go:31] will retry after 3.175644061s: waiting for machine to come up
	I0319 19:23:27.909404   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:27.909716   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:27.909739   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:27.909665   27371 retry.go:31] will retry after 3.654470598s: waiting for machine to come up
	I0319 19:23:31.567747   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:31.568069   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:31.568089   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:31.568041   27371 retry.go:31] will retry after 3.714075051s: waiting for machine to come up
	I0319 19:23:35.283120   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.283555   27348 main.go:141] libmachine: (ha-218762) Found IP for machine: 192.168.39.200
	I0319 19:23:35.283574   27348 main.go:141] libmachine: (ha-218762) Reserving static IP address...
	I0319 19:23:35.283583   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.283898   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find host DHCP lease matching {name: "ha-218762", mac: "52:54:00:2b:ad:c2", ip: "192.168.39.200"} in network mk-ha-218762
	I0319 19:23:35.350942   27348 main.go:141] libmachine: (ha-218762) DBG | Getting to WaitForSSH function...
	I0319 19:23:35.350970   27348 main.go:141] libmachine: (ha-218762) Reserved static IP address: 192.168.39.200
	I0319 19:23:35.350982   27348 main.go:141] libmachine: (ha-218762) Waiting for SSH to be available...
	I0319 19:23:35.353361   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.353792   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.353814   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.353993   27348 main.go:141] libmachine: (ha-218762) DBG | Using SSH client type: external
	I0319 19:23:35.354015   27348 main.go:141] libmachine: (ha-218762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa (-rw-------)
	I0319 19:23:35.354056   27348 main.go:141] libmachine: (ha-218762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:23:35.354066   27348 main.go:141] libmachine: (ha-218762) DBG | About to run SSH command:
	I0319 19:23:35.354096   27348 main.go:141] libmachine: (ha-218762) DBG | exit 0
	I0319 19:23:35.480569   27348 main.go:141] libmachine: (ha-218762) DBG | SSH cmd err, output: <nil>: 
	I0319 19:23:35.480786   27348 main.go:141] libmachine: (ha-218762) KVM machine creation complete!
	I0319 19:23:35.481121   27348 main.go:141] libmachine: (ha-218762) Calling .GetConfigRaw
	I0319 19:23:35.481629   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:35.481808   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:35.481951   27348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:23:35.481966   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:23:35.483075   27348 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:23:35.483089   27348 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:23:35.483098   27348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:23:35.483105   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.485259   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.485608   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.485641   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.485697   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.485860   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.486014   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.486166   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.486335   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.486513   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.486527   27348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:23:35.595882   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:23:35.595908   27348 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:23:35.595929   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.598562   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.598951   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.598976   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.599134   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.599315   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.599449   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.599563   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.599758   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.599962   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.599978   27348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:23:35.709230   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:23:35.709293   27348 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:23:35.709308   27348 main.go:141] libmachine: Provisioning with buildroot...
	I0319 19:23:35.709318   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:35.709541   27348 buildroot.go:166] provisioning hostname "ha-218762"
	I0319 19:23:35.709568   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:35.709762   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.712302   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.712607   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.712635   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.712734   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.712899   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.713040   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.713195   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.713325   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.713552   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.713567   27348 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762 && echo "ha-218762" | sudo tee /etc/hostname
	I0319 19:23:35.835388   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:23:35.835411   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.838021   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.838452   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.838472   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.838641   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.838823   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.838988   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.839139   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.839313   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.839496   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.839524   27348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762' | sudo tee -a /etc/hosts; 
				fi
			fi
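
	Once that hostname script has run, the result can be double-checked over the same SSH path the driver uses (a sketch reusing the key path and IP shown earlier in this log):

	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa \
	        docker@192.168.39.200 'hostname && grep ha-218762 /etc/hosts'
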
	I0319 19:23:35.959035   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:23:35.959065   27348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:23:35.959123   27348 buildroot.go:174] setting up certificates
	I0319 19:23:35.959146   27348 provision.go:84] configureAuth start
	I0319 19:23:35.959163   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:35.959451   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:35.961875   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.962224   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.962250   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.962397   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.965311   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.965668   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.965692   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.965867   27348 provision.go:143] copyHostCerts
	I0319 19:23:35.965900   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:23:35.965941   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:23:35.965954   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:23:35.966021   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:23:35.966112   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:23:35.966137   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:23:35.966146   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:23:35.966186   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:23:35.966240   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:23:35.966259   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:23:35.966267   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:23:35.966301   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:23:35.966357   27348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762 san=[127.0.0.1 192.168.39.200 ha-218762 localhost minikube]
	I0319 19:23:36.247556   27348 provision.go:177] copyRemoteCerts
	I0319 19:23:36.247606   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:23:36.247627   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.250153   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.250432   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.250451   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.250628   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.250787   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.250912   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.251054   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:36.334715   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:23:36.334794   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:23:36.361458   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:23:36.361528   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0319 19:23:36.387710   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:23:36.387765   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
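
	After copyRemoteCerts, ca.pem, server.pem and server-key.pem sit under /etc/docker on the guest. Their subject and SANs can be inspected with openssl over the same SSH connection (a sketch; the -ext flag needs OpenSSL 1.1.1 or newer, which the Buildroot 2023.02 guest should provide):

	    ssh -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa \
	        docker@192.168.39.200 \
	        'sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName'
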
	I0319 19:23:36.413310   27348 provision.go:87] duration metric: took 454.152044ms to configureAuth
	I0319 19:23:36.413327   27348 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:23:36.413468   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:23:36.413529   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.416309   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.416650   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.416681   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.416830   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.416981   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.417140   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.417272   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.417443   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:36.417636   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:36.417652   27348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:23:36.697091   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:23:36.697123   27348 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:23:36.697139   27348 main.go:141] libmachine: (ha-218762) Calling .GetURL
	I0319 19:23:36.698601   27348 main.go:141] libmachine: (ha-218762) DBG | Using libvirt version 6000000
	I0319 19:23:36.700778   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.701118   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.701146   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.701320   27348 main.go:141] libmachine: Docker is up and running!
	I0319 19:23:36.701336   27348 main.go:141] libmachine: Reticulating splines...
	I0319 19:23:36.701342   27348 client.go:171] duration metric: took 23.039758114s to LocalClient.Create
	I0319 19:23:36.701361   27348 start.go:167] duration metric: took 23.039811148s to libmachine.API.Create "ha-218762"
	I0319 19:23:36.701370   27348 start.go:293] postStartSetup for "ha-218762" (driver="kvm2")
	I0319 19:23:36.701379   27348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:23:36.701393   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.701648   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:23:36.701675   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.703532   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.703828   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.703853   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.703974   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.704125   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.704296   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.704428   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:36.786952   27348 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:23:36.791724   27348 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:23:36.791745   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:23:36.791806   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:23:36.791910   27348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:23:36.791923   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:23:36.792043   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:23:36.801988   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:23:36.828000   27348 start.go:296] duration metric: took 126.618743ms for postStartSetup
	I0319 19:23:36.828039   27348 main.go:141] libmachine: (ha-218762) Calling .GetConfigRaw
	I0319 19:23:36.828557   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:36.830625   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.830958   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.830989   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.831153   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:23:36.831315   27348 start.go:128] duration metric: took 23.186893376s to createHost
	I0319 19:23:36.831335   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.833256   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.833565   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.833589   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.833711   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.833876   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.834027   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.834144   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.834321   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:36.834476   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:36.834495   27348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:23:36.945337   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876216.913641100
	
	I0319 19:23:36.945358   27348 fix.go:216] guest clock: 1710876216.913641100
	I0319 19:23:36.945373   27348 fix.go:229] Guest: 2024-03-19 19:23:36.9136411 +0000 UTC Remote: 2024-03-19 19:23:36.831326652 +0000 UTC m=+23.297982092 (delta=82.314448ms)
	I0319 19:23:36.945396   27348 fix.go:200] guest clock delta is within tolerance: 82.314448ms
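The fix.go lines above compare the guest clock (read over SSH via the date command) with the host-side timestamp and accept the host when the drift is small; here the delta is about 82 ms. A rough, hypothetical Go sketch of that comparison, using the two timestamps from the log (the 2-second tolerance is an assumed value for illustration, not taken from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps reported by fix.go:229 above.
        guest := time.Date(2024, 3, 19, 19, 23, 36, 913641100, time.UTC)
        remote := time.Date(2024, 3, 19, 19, 23, 36, 831326652, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
    }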
	I0319 19:23:36.945403   27348 start.go:83] releasing machines lock for "ha-218762", held for 23.301056143s
	I0319 19:23:36.945423   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.945688   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:36.948216   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.948553   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.948588   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.948737   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.949237   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.949405   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.949503   27348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:23:36.949539   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.949626   27348 ssh_runner.go:195] Run: cat /version.json
	I0319 19:23:36.949648   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.951851   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952164   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952195   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.952230   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952335   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.952513   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.952671   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.952689   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952693   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.952820   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.952836   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:36.952955   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.953116   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.953252   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:37.053228   27348 ssh_runner.go:195] Run: systemctl --version
	I0319 19:23:37.059509   27348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:23:37.227644   27348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:23:37.234733   27348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:23:37.234793   27348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:23:37.253674   27348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 19:23:37.253689   27348 start.go:494] detecting cgroup driver to use...
	I0319 19:23:37.253745   27348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:23:37.271225   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:23:37.287126   27348 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:23:37.287166   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:23:37.302316   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:23:37.317370   27348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:23:37.445354   27348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:23:37.614479   27348 docker.go:233] disabling docker service ...
	I0319 19:23:37.614536   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:23:37.630422   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:23:37.644393   27348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:23:37.770883   27348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:23:37.884881   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:23:37.900070   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:23:37.920353   27348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:23:37.920417   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.931523   27348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:23:37.931575   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.942549   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.953522   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.964552   27348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:23:37.976425   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.987542   27348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:38.008218   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:38.019579   27348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:23:38.029874   27348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:23:38.029919   27348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:23:38.047948   27348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:23:38.062702   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:23:38.173044   27348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:23:38.313608   27348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:23:38.313666   27348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:23:38.319057   27348 start.go:562] Will wait 60s for crictl version
	I0319 19:23:38.319105   27348 ssh_runner.go:195] Run: which crictl
	I0319 19:23:38.323217   27348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:23:38.360505   27348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:23:38.360605   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:23:38.390311   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:23:38.425010   27348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:23:38.426364   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:38.428934   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:38.429286   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:38.429315   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:38.429518   27348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:23:38.434013   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
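The bash one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the gateway IP: it filters out any stale mapping, appends the fresh one, and copies the result back with sudo. A hypothetical Go equivalent of the filter-and-append step (the path and the tab-separated entry mirror the command shown; the final sudo cp is omitted for brevity):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.1\thost.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }

        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing host.minikube.internal mapping, matching the grep -v filter.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)

        // Stage the result in a temp file, as the shell command does with /tmp/h.$$.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }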
	I0319 19:23:38.448099   27348 kubeadm.go:877] updating cluster {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:23:38.448203   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:23:38.448250   27348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:23:38.488018   27348 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 19:23:38.488086   27348 ssh_runner.go:195] Run: which lz4
	I0319 19:23:38.492522   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0319 19:23:38.492593   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 19:23:38.497145   27348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 19:23:38.497181   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 19:23:40.122811   27348 crio.go:462] duration metric: took 1.630235492s to copy over tarball
	I0319 19:23:40.122872   27348 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 19:23:42.749149   27348 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.626249337s)
	I0319 19:23:42.749175   27348 crio.go:469] duration metric: took 2.626342309s to extract the tarball
	I0319 19:23:42.749181   27348 ssh_runner.go:146] rm: /preloaded.tar.lz4
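As a rough sanity check on the preload step above: the tarball is 402,967,820 bytes and the copy reportedly took about 1.63 s, which works out to roughly 236 MiB/s over the SSH transfer, with extraction adding another ~2.63 s. A small Go calculation of that rate (the figures are copied from the log lines above):

    package main

    import "fmt"

    func main() {
        const tarballBytes = 402967820.0 // size reported by the scp line above
        const copySeconds = 1.630235492  // "copy over tarball" duration from crio.go:462

        mibPerSec := tarballBytes / copySeconds / (1024 * 1024)
        fmt.Printf("preload copy throughput: ~%.0f MiB/s\n", mibPerSec) // ~236 MiB/s
    }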
	I0319 19:23:42.788753   27348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:23:42.838457   27348 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:23:42.838478   27348 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:23:42.838485   27348 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.29.3 crio true true} ...
	I0319 19:23:42.838575   27348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:23:42.838642   27348 ssh_runner.go:195] Run: crio config
	I0319 19:23:42.886617   27348 cni.go:84] Creating CNI manager for ""
	I0319 19:23:42.886637   27348 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0319 19:23:42.886648   27348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:23:42.886671   27348 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-218762 NodeName:ha-218762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:23:42.886785   27348 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-218762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 19:23:42.886807   27348 kube-vip.go:111] generating kube-vip config ...
	I0319 19:23:42.886844   27348 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:23:42.905208   27348 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:23:42.905340   27348 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0319 19:23:42.905394   27348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:23:42.917363   27348 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:23:42.917427   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0319 19:23:42.928684   27348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0319 19:23:42.947717   27348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:23:42.965642   27348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0319 19:23:42.983361   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0319 19:23:43.001617   27348 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:23:43.006169   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:23:43.020479   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:23:43.156133   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:23:43.174176   27348 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.200
	I0319 19:23:43.174200   27348 certs.go:194] generating shared ca certs ...
	I0319 19:23:43.174248   27348 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.174403   27348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:23:43.174455   27348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:23:43.174466   27348 certs.go:256] generating profile certs ...
	I0319 19:23:43.174531   27348 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:23:43.174549   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt with IP's: []
	I0319 19:23:43.392882   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt ...
	I0319 19:23:43.392911   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt: {Name:mka24831a144650fc12e99fb7602b05e3ab4357e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.393069   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key ...
	I0319 19:23:43.393080   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key: {Name:mk8697710c9481a12f7f2d4bccbf8fdb6ac58ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.393149   27348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea
	I0319 19:23:43.393164   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.254]
	I0319 19:23:43.497035   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea ...
	I0319 19:23:43.497065   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea: {Name:mk165b88fe7af465704e1426acd53551f0b36afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.497220   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea ...
	I0319 19:23:43.497232   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea: {Name:mkaf35a6353add309ad7e0286840c720e8749efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.497301   27348 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:23:43.497387   27348 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:23:43.497441   27348 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:23:43.497454   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt with IP's: []
	I0319 19:23:43.612821   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt ...
	I0319 19:23:43.612854   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt: {Name:mk5190a53f71d376643d1104d7bca70bdf2e0c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.613018   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key ...
	I0319 19:23:43.613030   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key: {Name:mk678608f54131146d1bb7d6f39b5961f53f5ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.613102   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:23:43.613121   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:23:43.613133   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:23:43.613149   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:23:43.613165   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:23:43.613183   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:23:43.613198   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:23:43.613213   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:23:43.613265   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:23:43.613307   27348 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:23:43.613319   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:23:43.613344   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:23:43.613368   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:23:43.613392   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:23:43.613440   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:23:43.613483   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.613508   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:43.613523   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:23:43.614065   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:23:43.649385   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:23:43.676987   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:23:43.705282   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:23:43.733582   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 19:23:43.763301   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 19:23:43.790974   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:23:43.818659   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:23:43.845762   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:23:43.872698   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:23:43.899478   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:23:43.925954   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:23:43.944608   27348 ssh_runner.go:195] Run: openssl version
	I0319 19:23:43.951180   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:23:43.966462   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.971605   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.971646   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.978480   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:23:43.992088   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:23:44.009985   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:44.015497   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:44.015556   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:44.024170   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:23:44.037957   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:23:44.058338   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:23:44.063411   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:23:44.063460   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:23:44.069749   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:23:44.081479   27348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:23:44.086535   27348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:23:44.086585   27348 kubeadm.go:391] StartCluster: {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:23:44.086654   27348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:23:44.086706   27348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:23:44.126804   27348 cri.go:89] found id: ""
	I0319 19:23:44.126869   27348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 19:23:44.137706   27348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 19:23:44.147876   27348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 19:23:44.157979   27348 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 19:23:44.157995   27348 kubeadm.go:156] found existing configuration files:
	
	I0319 19:23:44.158029   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 19:23:44.167338   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 19:23:44.167391   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 19:23:44.176915   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 19:23:44.186816   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 19:23:44.186875   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 19:23:44.196663   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 19:23:44.206057   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 19:23:44.206107   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 19:23:44.216565   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 19:23:44.226717   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 19:23:44.226785   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 19:23:44.237255   27348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 19:23:44.478822   27348 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 19:23:56.108471   27348 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 19:23:56.108541   27348 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 19:23:56.108634   27348 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 19:23:56.108761   27348 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 19:23:56.108891   27348 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 19:23:56.108973   27348 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 19:23:56.110617   27348 out.go:204]   - Generating certificates and keys ...
	I0319 19:23:56.110716   27348 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 19:23:56.110803   27348 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 19:23:56.110902   27348 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 19:23:56.110989   27348 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 19:23:56.111074   27348 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 19:23:56.111137   27348 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 19:23:56.111210   27348 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 19:23:56.111359   27348 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-218762 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0319 19:23:56.111431   27348 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 19:23:56.111573   27348 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-218762 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0319 19:23:56.111656   27348 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 19:23:56.111740   27348 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 19:23:56.111811   27348 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 19:23:56.111888   27348 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 19:23:56.111956   27348 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 19:23:56.112037   27348 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 19:23:56.112111   27348 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 19:23:56.112231   27348 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 19:23:56.112344   27348 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 19:23:56.112478   27348 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 19:23:56.112574   27348 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 19:23:56.114865   27348 out.go:204]   - Booting up control plane ...
	I0319 19:23:56.114981   27348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 19:23:56.115089   27348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 19:23:56.115153   27348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 19:23:56.115252   27348 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 19:23:56.115398   27348 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 19:23:56.115458   27348 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 19:23:56.115652   27348 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 19:23:56.115752   27348 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.579346 seconds
	I0319 19:23:56.115877   27348 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 19:23:56.116037   27348 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 19:23:56.116085   27348 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 19:23:56.116304   27348 kubeadm.go:309] [mark-control-plane] Marking the node ha-218762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 19:23:56.116356   27348 kubeadm.go:309] [bootstrap-token] Using token: jgwb7g.gi5mwlrvqlxl7rgc
	I0319 19:23:56.117594   27348 out.go:204]   - Configuring RBAC rules ...
	I0319 19:23:56.117673   27348 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 19:23:56.117738   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 19:23:56.117853   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 19:23:56.118002   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 19:23:56.118157   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 19:23:56.118228   27348 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 19:23:56.118314   27348 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 19:23:56.118374   27348 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 19:23:56.118436   27348 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 19:23:56.118455   27348 kubeadm.go:309] 
	I0319 19:23:56.118542   27348 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 19:23:56.118556   27348 kubeadm.go:309] 
	I0319 19:23:56.118684   27348 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 19:23:56.118703   27348 kubeadm.go:309] 
	I0319 19:23:56.118757   27348 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 19:23:56.118843   27348 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 19:23:56.118912   27348 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 19:23:56.118923   27348 kubeadm.go:309] 
	I0319 19:23:56.118977   27348 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 19:23:56.118983   27348 kubeadm.go:309] 
	I0319 19:23:56.119042   27348 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 19:23:56.119054   27348 kubeadm.go:309] 
	I0319 19:23:56.119129   27348 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 19:23:56.119227   27348 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 19:23:56.119318   27348 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 19:23:56.119327   27348 kubeadm.go:309] 
	I0319 19:23:56.119432   27348 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 19:23:56.119493   27348 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 19:23:56.119500   27348 kubeadm.go:309] 
	I0319 19:23:56.119561   27348 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jgwb7g.gi5mwlrvqlxl7rgc \
	I0319 19:23:56.119649   27348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 19:23:56.119668   27348 kubeadm.go:309] 	--control-plane 
	I0319 19:23:56.119671   27348 kubeadm.go:309] 
	I0319 19:23:56.119734   27348 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 19:23:56.119740   27348 kubeadm.go:309] 
	I0319 19:23:56.119804   27348 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jgwb7g.gi5mwlrvqlxl7rgc \
	I0319 19:23:56.119959   27348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 19:23:56.119979   27348 cni.go:84] Creating CNI manager for ""
	I0319 19:23:56.119987   27348 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0319 19:23:56.121697   27348 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0319 19:23:56.123144   27348 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0319 19:23:56.138925   27348 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0319 19:23:56.138951   27348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0319 19:23:56.168991   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
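	With only one node detected, minikube chose kindnet and applied its manifest with the kubectl invocation above. A rough follow-up check that the CNI pods came up might look like the line below; note the app=kindnet label selector is an assumption about the manifest, not something shown in this log:

    sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet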
	I0319 19:23:56.616133   27348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 19:23:56.616226   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:56.616249   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-218762 minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=ha-218762 minikube.k8s.io/primary=true
	I0319 19:23:56.641653   27348 ops.go:34] apiserver oom_adj: -16
	I0319 19:23:56.759415   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:57.260372   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:57.759932   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:58.259813   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:58.759613   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:59.259463   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:59.760053   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:00.259683   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:00.760461   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:01.259941   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:01.760134   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:02.260147   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:02.759579   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:03.259541   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:03.760078   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:04.259495   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:04.760078   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:05.259822   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:05.760357   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:06.259703   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:06.760335   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:07.259502   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:07.759633   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:08.260454   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:08.399653   27348 kubeadm.go:1107] duration metric: took 11.783495552s to wait for elevateKubeSystemPrivileges
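	The run of "kubectl get sa default" calls above is minikube polling about every 500ms until the default service account exists; that wait is what the 11.78s elevateKubeSystemPrivileges metric measures. A minimal sketch of the same wait, reusing the kubectl binary and kubeconfig paths from this log:

    # poll until the default service account appears, then continue
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done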
	W0319 19:24:08.399689   27348 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 19:24:08.399698   27348 kubeadm.go:393] duration metric: took 24.313115746s to StartCluster
	I0319 19:24:08.399718   27348 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:08.399810   27348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:24:08.400404   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:08.400623   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 19:24:08.400636   27348 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 19:24:08.400691   27348 addons.go:69] Setting storage-provisioner=true in profile "ha-218762"
	I0319 19:24:08.400618   27348 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:24:08.400742   27348 start.go:240] waiting for startup goroutines ...
	I0319 19:24:08.400717   27348 addons.go:69] Setting default-storageclass=true in profile "ha-218762"
	I0319 19:24:08.400781   27348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-218762"
	I0319 19:24:08.400719   27348 addons.go:234] Setting addon storage-provisioner=true in "ha-218762"
	I0319 19:24:08.400895   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:24:08.400834   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:08.401197   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.401225   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.401277   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.401314   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.416354   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0319 19:24:08.416357   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0319 19:24:08.416840   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.416941   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.417371   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.417392   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.417461   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.417478   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.417697   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.417748   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.417935   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:08.418213   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.418242   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.420048   27348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:24:08.420312   27348 kapi.go:59] client config for ha-218762: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt", KeyFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key", CAFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0319 19:24:08.420727   27348 cert_rotation.go:137] Starting client certificate rotation controller
	I0319 19:24:08.420901   27348 addons.go:234] Setting addon default-storageclass=true in "ha-218762"
	I0319 19:24:08.420931   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:24:08.421207   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.421226   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.432480   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
	I0319 19:24:08.432904   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.433343   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.433361   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.433640   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.433846   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:08.435363   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:24:08.437784   27348 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 19:24:08.435580   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I0319 19:24:08.439290   27348 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:24:08.439307   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 19:24:08.439322   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:24:08.439682   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.440100   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.440119   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.440504   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.441057   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.441088   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.442176   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.442568   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:24:08.442590   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.442686   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:24:08.442838   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:24:08.442976   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:24:08.443089   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:24:08.455262   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0319 19:24:08.455551   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.455953   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.455975   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.456265   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.456441   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:08.457974   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:24:08.458173   27348 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 19:24:08.458185   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 19:24:08.458197   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:24:08.460497   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.460831   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:24:08.460847   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.461038   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:24:08.461202   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:24:08.461334   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:24:08.461485   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:24:08.496913   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0319 19:24:08.578292   27348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:24:08.602955   27348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 19:24:08.796234   27348 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
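	The sed pipeline at 19:24:08.496913 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1, which is what the "host record injected" message confirms. A hedged way to verify the injected block (the exact output shape may differ):

    sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'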
	I0319 19:24:09.093831   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.093861   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.093912   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.093922   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.094169   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.094172   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094192   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.094200   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094181   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094286   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094301   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.094309   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.094210   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.094334   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.094603   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094616   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094646   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.094676   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094684   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094785   27348 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0319 19:24:09.094794   27348 round_trippers.go:469] Request Headers:
	I0319 19:24:09.094804   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:24:09.094809   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:24:09.104238   27348 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0319 19:24:09.104753   27348 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0319 19:24:09.104767   27348 round_trippers.go:469] Request Headers:
	I0319 19:24:09.104777   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:24:09.104786   27348 round_trippers.go:473]     Content-Type: application/json
	I0319 19:24:09.104793   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:24:09.107595   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:24:09.107708   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.107718   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.107926   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.107942   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.107959   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.109736   27348 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0319 19:24:09.110904   27348 addons.go:505] duration metric: took 710.266074ms for enable addons: enabled=[storage-provisioner default-storageclass]
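	The GET and PUT against /apis/storage.k8s.io/v1/storageclasses/standard above are the default-storageclass addon reconciling the "standard" StorageClass after the storage-provisioner manifest was applied. A quick manual check of both addons could look like the following; the storage-provisioner pod name is an assumption, while the StorageClass name comes from the request path above:

    kubectl get storageclass standard
    kubectl -n kube-system get pod storage-provisioner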
	I0319 19:24:09.110939   27348 start.go:245] waiting for cluster config update ...
	I0319 19:24:09.110955   27348 start.go:254] writing updated cluster config ...
	I0319 19:24:09.112685   27348 out.go:177] 
	I0319 19:24:09.114089   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:09.114173   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:24:09.115868   27348 out.go:177] * Starting "ha-218762-m02" control-plane node in "ha-218762" cluster
	I0319 19:24:09.116878   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:24:09.116898   27348 cache.go:56] Caching tarball of preloaded images
	I0319 19:24:09.116979   27348 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:24:09.116992   27348 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:24:09.117068   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:24:09.117228   27348 start.go:360] acquireMachinesLock for ha-218762-m02: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:24:09.117276   27348 start.go:364] duration metric: took 30.229µs to acquireMachinesLock for "ha-218762-m02"
	I0319 19:24:09.117302   27348 start.go:93] Provisioning new machine with config: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:24:09.117388   27348 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0319 19:24:09.118566   27348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 19:24:09.118642   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:09.118669   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:09.132381   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0319 19:24:09.132753   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:09.133161   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:09.133176   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:09.133472   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:09.133684   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:09.133841   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:09.133986   27348 start.go:159] libmachine.API.Create for "ha-218762" (driver="kvm2")
	I0319 19:24:09.134009   27348 client.go:168] LocalClient.Create starting
	I0319 19:24:09.134038   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:24:09.134071   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:24:09.134088   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:24:09.134151   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:24:09.134179   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:24:09.134197   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:24:09.134219   27348 main.go:141] libmachine: Running pre-create checks...
	I0319 19:24:09.134231   27348 main.go:141] libmachine: (ha-218762-m02) Calling .PreCreateCheck
	I0319 19:24:09.134383   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetConfigRaw
	I0319 19:24:09.134739   27348 main.go:141] libmachine: Creating machine...
	I0319 19:24:09.134752   27348 main.go:141] libmachine: (ha-218762-m02) Calling .Create
	I0319 19:24:09.134874   27348 main.go:141] libmachine: (ha-218762-m02) Creating KVM machine...
	I0319 19:24:09.136007   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found existing default KVM network
	I0319 19:24:09.136161   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found existing private KVM network mk-ha-218762
	I0319 19:24:09.136364   27348 main.go:141] libmachine: (ha-218762-m02) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02 ...
	I0319 19:24:09.136390   27348 main.go:141] libmachine: (ha-218762-m02) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:24:09.136439   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.136339   27723 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:24:09.136514   27348 main.go:141] libmachine: (ha-218762-m02) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:24:09.352009   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.351882   27723 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa...
	I0319 19:24:09.449610   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.449508   27723 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/ha-218762-m02.rawdisk...
	I0319 19:24:09.449645   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Writing magic tar header
	I0319 19:24:09.449659   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Writing SSH key tar header
	I0319 19:24:09.449670   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.449615   27723 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02 ...
	I0319 19:24:09.449727   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02
	I0319 19:24:09.449758   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:24:09.449782   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:24:09.449801   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02 (perms=drwx------)
	I0319 19:24:09.449816   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:24:09.449831   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:24:09.449850   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:24:09.449869   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:24:09.449883   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:24:09.449906   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:24:09.449917   27348 main.go:141] libmachine: (ha-218762-m02) Creating domain...
	I0319 19:24:09.449931   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:24:09.449944   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:24:09.449972   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home
	I0319 19:24:09.449996   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Skipping /home - not owner
	I0319 19:24:09.450793   27348 main.go:141] libmachine: (ha-218762-m02) define libvirt domain using xml: 
	I0319 19:24:09.450815   27348 main.go:141] libmachine: (ha-218762-m02) <domain type='kvm'>
	I0319 19:24:09.450825   27348 main.go:141] libmachine: (ha-218762-m02)   <name>ha-218762-m02</name>
	I0319 19:24:09.450834   27348 main.go:141] libmachine: (ha-218762-m02)   <memory unit='MiB'>2200</memory>
	I0319 19:24:09.450846   27348 main.go:141] libmachine: (ha-218762-m02)   <vcpu>2</vcpu>
	I0319 19:24:09.450854   27348 main.go:141] libmachine: (ha-218762-m02)   <features>
	I0319 19:24:09.450860   27348 main.go:141] libmachine: (ha-218762-m02)     <acpi/>
	I0319 19:24:09.450867   27348 main.go:141] libmachine: (ha-218762-m02)     <apic/>
	I0319 19:24:09.450873   27348 main.go:141] libmachine: (ha-218762-m02)     <pae/>
	I0319 19:24:09.450884   27348 main.go:141] libmachine: (ha-218762-m02)     
	I0319 19:24:09.450892   27348 main.go:141] libmachine: (ha-218762-m02)   </features>
	I0319 19:24:09.450897   27348 main.go:141] libmachine: (ha-218762-m02)   <cpu mode='host-passthrough'>
	I0319 19:24:09.450904   27348 main.go:141] libmachine: (ha-218762-m02)   
	I0319 19:24:09.450909   27348 main.go:141] libmachine: (ha-218762-m02)   </cpu>
	I0319 19:24:09.450919   27348 main.go:141] libmachine: (ha-218762-m02)   <os>
	I0319 19:24:09.450924   27348 main.go:141] libmachine: (ha-218762-m02)     <type>hvm</type>
	I0319 19:24:09.450930   27348 main.go:141] libmachine: (ha-218762-m02)     <boot dev='cdrom'/>
	I0319 19:24:09.450937   27348 main.go:141] libmachine: (ha-218762-m02)     <boot dev='hd'/>
	I0319 19:24:09.450944   27348 main.go:141] libmachine: (ha-218762-m02)     <bootmenu enable='no'/>
	I0319 19:24:09.450950   27348 main.go:141] libmachine: (ha-218762-m02)   </os>
	I0319 19:24:09.450956   27348 main.go:141] libmachine: (ha-218762-m02)   <devices>
	I0319 19:24:09.450963   27348 main.go:141] libmachine: (ha-218762-m02)     <disk type='file' device='cdrom'>
	I0319 19:24:09.450972   27348 main.go:141] libmachine: (ha-218762-m02)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/boot2docker.iso'/>
	I0319 19:24:09.450979   27348 main.go:141] libmachine: (ha-218762-m02)       <target dev='hdc' bus='scsi'/>
	I0319 19:24:09.450986   27348 main.go:141] libmachine: (ha-218762-m02)       <readonly/>
	I0319 19:24:09.450994   27348 main.go:141] libmachine: (ha-218762-m02)     </disk>
	I0319 19:24:09.450999   27348 main.go:141] libmachine: (ha-218762-m02)     <disk type='file' device='disk'>
	I0319 19:24:09.451008   27348 main.go:141] libmachine: (ha-218762-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:24:09.451016   27348 main.go:141] libmachine: (ha-218762-m02)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/ha-218762-m02.rawdisk'/>
	I0319 19:24:09.451024   27348 main.go:141] libmachine: (ha-218762-m02)       <target dev='hda' bus='virtio'/>
	I0319 19:24:09.451030   27348 main.go:141] libmachine: (ha-218762-m02)     </disk>
	I0319 19:24:09.451034   27348 main.go:141] libmachine: (ha-218762-m02)     <interface type='network'>
	I0319 19:24:09.451043   27348 main.go:141] libmachine: (ha-218762-m02)       <source network='mk-ha-218762'/>
	I0319 19:24:09.451047   27348 main.go:141] libmachine: (ha-218762-m02)       <model type='virtio'/>
	I0319 19:24:09.451054   27348 main.go:141] libmachine: (ha-218762-m02)     </interface>
	I0319 19:24:09.451059   27348 main.go:141] libmachine: (ha-218762-m02)     <interface type='network'>
	I0319 19:24:09.451066   27348 main.go:141] libmachine: (ha-218762-m02)       <source network='default'/>
	I0319 19:24:09.451073   27348 main.go:141] libmachine: (ha-218762-m02)       <model type='virtio'/>
	I0319 19:24:09.451078   27348 main.go:141] libmachine: (ha-218762-m02)     </interface>
	I0319 19:24:09.451085   27348 main.go:141] libmachine: (ha-218762-m02)     <serial type='pty'>
	I0319 19:24:09.451090   27348 main.go:141] libmachine: (ha-218762-m02)       <target port='0'/>
	I0319 19:24:09.451097   27348 main.go:141] libmachine: (ha-218762-m02)     </serial>
	I0319 19:24:09.451101   27348 main.go:141] libmachine: (ha-218762-m02)     <console type='pty'>
	I0319 19:24:09.451106   27348 main.go:141] libmachine: (ha-218762-m02)       <target type='serial' port='0'/>
	I0319 19:24:09.451113   27348 main.go:141] libmachine: (ha-218762-m02)     </console>
	I0319 19:24:09.451118   27348 main.go:141] libmachine: (ha-218762-m02)     <rng model='virtio'>
	I0319 19:24:09.451123   27348 main.go:141] libmachine: (ha-218762-m02)       <backend model='random'>/dev/random</backend>
	I0319 19:24:09.451127   27348 main.go:141] libmachine: (ha-218762-m02)     </rng>
	I0319 19:24:09.451134   27348 main.go:141] libmachine: (ha-218762-m02)     
	I0319 19:24:09.451138   27348 main.go:141] libmachine: (ha-218762-m02)     
	I0319 19:24:09.451143   27348 main.go:141] libmachine: (ha-218762-m02)   </devices>
	I0319 19:24:09.451149   27348 main.go:141] libmachine: (ha-218762-m02) </domain>
	I0319 19:24:09.451155   27348 main.go:141] libmachine: (ha-218762-m02) 
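	The XML dumped above is the libvirt domain definition for the m02 machine: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-218762 network, one on default). As an illustrative aside, a domain defined this way can be inspected from the host with standard virsh commands:

    virsh dumpxml ha-218762-m02     # print the defined domain XML
    virsh domiflist ha-218762-m02   # list its network interfaces and MACs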
	I0319 19:24:09.457818   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:d5:02:d3 in network default
	I0319 19:24:09.458321   27348 main.go:141] libmachine: (ha-218762-m02) Ensuring networks are active...
	I0319 19:24:09.458344   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:09.459008   27348 main.go:141] libmachine: (ha-218762-m02) Ensuring network default is active
	I0319 19:24:09.459273   27348 main.go:141] libmachine: (ha-218762-m02) Ensuring network mk-ha-218762 is active
	I0319 19:24:09.459575   27348 main.go:141] libmachine: (ha-218762-m02) Getting domain xml...
	I0319 19:24:09.460239   27348 main.go:141] libmachine: (ha-218762-m02) Creating domain...
	I0319 19:24:10.688079   27348 main.go:141] libmachine: (ha-218762-m02) Waiting to get IP...
	I0319 19:24:10.688985   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:10.689439   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:10.689466   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:10.689400   27723 retry.go:31] will retry after 241.907067ms: waiting for machine to come up
	I0319 19:24:10.932878   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:10.933368   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:10.933399   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:10.933310   27723 retry.go:31] will retry after 360.492289ms: waiting for machine to come up
	I0319 19:24:11.295858   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:11.296334   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:11.296356   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:11.296298   27723 retry.go:31] will retry after 348.561104ms: waiting for machine to come up
	I0319 19:24:11.646768   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:11.647236   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:11.647260   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:11.647200   27723 retry.go:31] will retry after 572.33675ms: waiting for machine to come up
	I0319 19:24:12.220627   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:12.221063   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:12.221087   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:12.221023   27723 retry.go:31] will retry after 640.071922ms: waiting for machine to come up
	I0319 19:24:12.862498   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:12.862911   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:12.862943   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:12.862871   27723 retry.go:31] will retry after 937.280979ms: waiting for machine to come up
	I0319 19:24:13.801386   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:13.801793   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:13.801823   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:13.801740   27723 retry.go:31] will retry after 1.122005935s: waiting for machine to come up
	I0319 19:24:14.925675   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:14.926034   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:14.926054   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:14.925998   27723 retry.go:31] will retry after 1.034147281s: waiting for machine to come up
	I0319 19:24:15.962135   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:15.962501   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:15.962542   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:15.962483   27723 retry.go:31] will retry after 1.788451935s: waiting for machine to come up
	I0319 19:24:17.753255   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:17.753608   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:17.753631   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:17.753563   27723 retry.go:31] will retry after 1.438912642s: waiting for machine to come up
	I0319 19:24:19.193815   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:19.194226   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:19.194259   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:19.194168   27723 retry.go:31] will retry after 2.023000789s: waiting for machine to come up
	I0319 19:24:21.219365   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:21.219772   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:21.219794   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:21.219734   27723 retry.go:31] will retry after 2.388284325s: waiting for machine to come up
	I0319 19:24:23.611079   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:23.611472   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:23.611500   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:23.611426   27723 retry.go:31] will retry after 3.691797958s: waiting for machine to come up
	I0319 19:24:27.306012   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:27.306427   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:27.306468   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:27.306376   27723 retry.go:31] will retry after 4.354456279s: waiting for machine to come up
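	The retry loop above is libmachine polling libvirt for a DHCP lease on the mk-ha-218762 network, backing off between attempts until the new VM reports an address. A comparable manual check from the host (sketch only):

    virsh net-dhcp-leases mk-ha-218762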
	I0319 19:24:31.663824   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:31.664180   27348 main.go:141] libmachine: (ha-218762-m02) Found IP for machine: 192.168.39.234
	I0319 19:24:31.664208   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has current primary IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:31.664219   27348 main.go:141] libmachine: (ha-218762-m02) Reserving static IP address...
	I0319 19:24:31.664515   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find host DHCP lease matching {name: "ha-218762-m02", mac: "52:54:00:ab:0e:bd", ip: "192.168.39.234"} in network mk-ha-218762
	I0319 19:24:31.735183   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Getting to WaitForSSH function...
	I0319 19:24:31.735226   27348 main.go:141] libmachine: (ha-218762-m02) Reserved static IP address: 192.168.39.234
	I0319 19:24:31.735239   27348 main.go:141] libmachine: (ha-218762-m02) Waiting for SSH to be available...
	I0319 19:24:31.737749   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:31.738159   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762
	I0319 19:24:31.738184   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find defined IP address of network mk-ha-218762 interface with MAC address 52:54:00:ab:0e:bd
	I0319 19:24:31.738313   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH client type: external
	I0319 19:24:31.738332   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa (-rw-------)
	I0319 19:24:31.738360   27348 main.go:141] libmachine: (ha-218762-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:24:31.738367   27348 main.go:141] libmachine: (ha-218762-m02) DBG | About to run SSH command:
	I0319 19:24:31.738383   27348 main.go:141] libmachine: (ha-218762-m02) DBG | exit 0
	I0319 19:24:31.742259   27348 main.go:141] libmachine: (ha-218762-m02) DBG | SSH cmd err, output: exit status 255: 
	I0319 19:24:31.742283   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0319 19:24:31.742292   27348 main.go:141] libmachine: (ha-218762-m02) DBG | command : exit 0
	I0319 19:24:31.742304   27348 main.go:141] libmachine: (ha-218762-m02) DBG | err     : exit status 255
	I0319 19:24:31.742320   27348 main.go:141] libmachine: (ha-218762-m02) DBG | output  : 
	I0319 19:24:34.743284   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Getting to WaitForSSH function...
	I0319 19:24:34.745760   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.746143   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:34.746173   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.746355   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH client type: external
	I0319 19:24:34.746378   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa (-rw-------)
	I0319 19:24:34.746410   27348 main.go:141] libmachine: (ha-218762-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:24:34.746431   27348 main.go:141] libmachine: (ha-218762-m02) DBG | About to run SSH command:
	I0319 19:24:34.746451   27348 main.go:141] libmachine: (ha-218762-m02) DBG | exit 0
	I0319 19:24:34.868360   27348 main.go:141] libmachine: (ha-218762-m02) DBG | SSH cmd err, output: <nil>: 
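	WaitForSSH keeps re-running "exit 0" over SSH with the options shown at 19:24:31 and 19:24:34 until the command exits cleanly; the first attempt fails with status 255 because the guest IP was not yet known, while the second, against 192.168.39.234, succeeds. A minimal sketch of that wait, reusing the key path, user and address from this log:

    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa \
        docker@192.168.39.234 exit 0; do
      sleep 3
    done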
	I0319 19:24:34.868642   27348 main.go:141] libmachine: (ha-218762-m02) KVM machine creation complete!
	I0319 19:24:34.868931   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetConfigRaw
	I0319 19:24:34.869449   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:34.869636   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:34.869818   27348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:24:34.869835   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:24:34.871080   27348 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:24:34.871093   27348 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:24:34.871098   27348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:24:34.871104   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:34.873305   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.873679   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:34.873708   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.873811   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:34.873984   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.874145   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.874303   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:34.874444   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:34.874628   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:34.874638   27348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:24:34.975552   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:24:34.975578   27348 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:24:34.975588   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:34.978298   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.978624   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:34.978649   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.978791   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:34.978979   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.979146   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.979280   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:34.979441   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:34.979593   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:34.979604   27348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:24:35.081422   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:24:35.081507   27348 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:24:35.081521   27348 main.go:141] libmachine: Provisioning with buildroot...
	I0319 19:24:35.081529   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:35.081784   27348 buildroot.go:166] provisioning hostname "ha-218762-m02"
	I0319 19:24:35.081805   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:35.082002   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.085929   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.086422   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.086493   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.086591   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.086804   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.087084   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.087286   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.087452   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:35.087605   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:35.087618   27348 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762-m02 && echo "ha-218762-m02" | sudo tee /etc/hostname
	I0319 19:24:35.204814   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762-m02
	
	I0319 19:24:35.204854   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.207405   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.207750   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.207778   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.207929   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.208117   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.208302   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.208466   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.208629   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:35.208784   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:35.208799   27348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:24:35.319044   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:24:35.319080   27348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:24:35.319100   27348 buildroot.go:174] setting up certificates
	I0319 19:24:35.319110   27348 provision.go:84] configureAuth start
	I0319 19:24:35.319123   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:35.319386   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:35.322159   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.322499   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.322528   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.322782   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.325377   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.325671   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.325697   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.325919   27348 provision.go:143] copyHostCerts
	I0319 19:24:35.325949   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:24:35.325978   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:24:35.325986   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:24:35.326048   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:24:35.326117   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:24:35.326133   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:24:35.326141   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:24:35.326162   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:24:35.326249   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:24:35.326270   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:24:35.326275   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:24:35.326309   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:24:35.326363   27348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762-m02 san=[127.0.0.1 192.168.39.234 ha-218762-m02 localhost minikube]
	I0319 19:24:35.537474   27348 provision.go:177] copyRemoteCerts
	I0319 19:24:35.537524   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:24:35.537546   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.540273   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.540583   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.540614   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.540773   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.540939   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.541086   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.541231   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:35.623138   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:24:35.623207   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0319 19:24:35.651170   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:24:35.651233   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 19:24:35.678948   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:24:35.679015   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:24:35.707837   27348 provision.go:87] duration metric: took 388.716414ms to configureAuth
	I0319 19:24:35.707870   27348 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:24:35.708019   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:35.708081   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.710589   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.710936   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.710974   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.711119   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.711290   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.711444   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.711609   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.711775   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:35.711935   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:35.711949   27348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:24:35.983586   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:24:35.983613   27348 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:24:35.983623   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetURL
	I0319 19:24:35.984709   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using libvirt version 6000000
	I0319 19:24:35.986477   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.986809   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.986830   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.987029   27348 main.go:141] libmachine: Docker is up and running!
	I0319 19:24:35.987045   27348 main.go:141] libmachine: Reticulating splines...
	I0319 19:24:35.987054   27348 client.go:171] duration metric: took 26.853037909s to LocalClient.Create
	I0319 19:24:35.987084   27348 start.go:167] duration metric: took 26.853098495s to libmachine.API.Create "ha-218762"
	I0319 19:24:35.987097   27348 start.go:293] postStartSetup for "ha-218762-m02" (driver="kvm2")
	I0319 19:24:35.987111   27348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:24:35.987128   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:35.987331   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:24:35.987353   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.989430   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.989742   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.989772   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.989894   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.990105   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.990262   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.990380   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:36.073266   27348 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:24:36.078444   27348 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:24:36.078470   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:24:36.078538   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:24:36.078623   27348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:24:36.078633   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:24:36.078704   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:24:36.089599   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:24:36.116537   27348 start.go:296] duration metric: took 129.427413ms for postStartSetup
	I0319 19:24:36.116576   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetConfigRaw
	I0319 19:24:36.117171   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:36.119370   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.119641   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.119660   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.119921   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:24:36.120134   27348 start.go:128] duration metric: took 27.002733312s to createHost
	I0319 19:24:36.120160   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:36.122149   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.122569   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.122595   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.122717   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:36.122848   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.122983   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.123089   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:36.123216   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:36.123372   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:36.123383   27348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:24:36.225736   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876276.199476383
	
	I0319 19:24:36.225762   27348 fix.go:216] guest clock: 1710876276.199476383
	I0319 19:24:36.225769   27348 fix.go:229] Guest: 2024-03-19 19:24:36.199476383 +0000 UTC Remote: 2024-03-19 19:24:36.120147227 +0000 UTC m=+82.586802676 (delta=79.329156ms)
	I0319 19:24:36.225782   27348 fix.go:200] guest clock delta is within tolerance: 79.329156ms
	I0319 19:24:36.225787   27348 start.go:83] releasing machines lock for "ha-218762-m02", held for 27.10849928s
	I0319 19:24:36.225805   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.226085   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:36.228565   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.228943   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.228973   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.231315   27348 out.go:177] * Found network options:
	I0319 19:24:36.232788   27348 out.go:177]   - NO_PROXY=192.168.39.200
	W0319 19:24:36.234124   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:24:36.234156   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.234633   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.234824   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.234905   27348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:24:36.234942   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	W0319 19:24:36.235027   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:24:36.235093   27348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:24:36.235114   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:36.237426   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.237744   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.237815   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.237849   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.237999   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:36.238060   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.238091   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.238193   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.238378   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:36.238387   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:36.238551   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.238547   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:36.238692   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:36.238816   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:36.472989   27348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:24:36.480468   27348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:24:36.480541   27348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:24:36.498734   27348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 19:24:36.498757   27348 start.go:494] detecting cgroup driver to use...
	I0319 19:24:36.498822   27348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:24:36.520118   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:24:36.538310   27348 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:24:36.538360   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:24:36.556254   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:24:36.573969   27348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:24:36.703237   27348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:24:36.840282   27348 docker.go:233] disabling docker service ...
	I0319 19:24:36.840349   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:24:36.857338   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:24:36.871851   27348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:24:37.007320   27348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:24:37.148570   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:24:37.174055   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:24:37.194852   27348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:24:37.194918   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.207083   27348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:24:37.207137   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.218504   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.229423   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.240393   27348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:24:37.252212   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.263942   27348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.283851   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.295634   27348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:24:37.305608   27348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:24:37.305660   27348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:24:37.319719   27348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:24:37.329851   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:24:37.463372   27348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:24:37.621609   27348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:24:37.621672   27348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:24:37.627757   27348 start.go:562] Will wait 60s for crictl version
	I0319 19:24:37.627813   27348 ssh_runner.go:195] Run: which crictl
	I0319 19:24:37.632007   27348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:24:37.670327   27348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:24:37.670388   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:24:37.704916   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:24:37.736656   27348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:24:37.738080   27348 out.go:177]   - env NO_PROXY=192.168.39.200
	I0319 19:24:37.739409   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:37.742006   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:37.742358   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:37.742384   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:37.742616   27348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:24:37.747089   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:24:37.761455   27348 mustload.go:65] Loading cluster: ha-218762
	I0319 19:24:37.761674   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:37.761928   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:37.761952   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:37.776184   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0319 19:24:37.776575   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:37.777040   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:37.777065   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:37.777436   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:37.777649   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:37.779012   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:24:37.779275   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:37.779299   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:37.792981   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0319 19:24:37.793405   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:37.793840   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:37.793860   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:37.794135   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:37.794317   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:24:37.794474   27348 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.234
	I0319 19:24:37.794486   27348 certs.go:194] generating shared ca certs ...
	I0319 19:24:37.794504   27348 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:37.794633   27348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:24:37.794684   27348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:24:37.794698   27348 certs.go:256] generating profile certs ...
	I0319 19:24:37.794778   27348 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:24:37.794808   27348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190
	I0319 19:24:37.794829   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.254]
	I0319 19:24:38.041687   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190 ...
	I0319 19:24:38.041715   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190: {Name:mkdc5aa372770cfba177067290e99c812165411e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:38.041896   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190 ...
	I0319 19:24:38.041914   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190: {Name:mkb3673a763c650724d08648f73a648066a45f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:38.042006   27348 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:24:38.042147   27348 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:24:38.042302   27348 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:24:38.042319   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:24:38.042336   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:24:38.042358   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:24:38.042378   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:24:38.042396   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:24:38.042411   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:24:38.042429   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:24:38.042447   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:24:38.042518   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:24:38.042557   27348 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:24:38.042571   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:24:38.042609   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:24:38.042646   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:24:38.042676   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:24:38.042727   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:24:38.042767   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.042793   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.042811   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.042849   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:24:38.045841   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:38.046258   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:24:38.046283   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:38.046435   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:24:38.046614   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:24:38.046756   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:24:38.046863   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:24:38.124603   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0319 19:24:38.129963   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0319 19:24:38.145656   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0319 19:24:38.151704   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0319 19:24:38.163237   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0319 19:24:38.168079   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0319 19:24:38.179223   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0319 19:24:38.183958   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0319 19:24:38.195603   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0319 19:24:38.200209   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0319 19:24:38.212130   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0319 19:24:38.216803   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0319 19:24:38.228914   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:24:38.257457   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:24:38.283417   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:24:38.308971   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:24:38.334720   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0319 19:24:38.360044   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 19:24:38.385284   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:24:38.410479   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:24:38.437042   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:24:38.463073   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:24:38.489289   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:24:38.514924   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0319 19:24:38.533209   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0319 19:24:38.551406   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0319 19:24:38.569691   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0319 19:24:38.588065   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0319 19:24:38.606205   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0319 19:24:38.625137   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0319 19:24:38.643070   27348 ssh_runner.go:195] Run: openssl version
	I0319 19:24:38.649680   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:24:38.661801   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.666659   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.666698   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.672672   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:24:38.684564   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:24:38.697224   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.702172   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.702217   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.709835   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:24:38.723062   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:24:38.735260   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.740020   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.740061   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.746062   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:24:38.758243   27348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:24:38.762688   27348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:24:38.762732   27348 kubeadm.go:928] updating node {m02 192.168.39.234 8443 v1.29.3 crio true true} ...
	I0319 19:24:38.762800   27348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:24:38.762824   27348 kube-vip.go:111] generating kube-vip config ...
	I0319 19:24:38.762851   27348 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:24:38.781545   27348 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:24:38.781613   27348 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0319 19:24:38.781664   27348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:24:38.792816   27348 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0319 19:24:38.792873   27348 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0319 19:24:38.803939   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0319 19:24:38.803957   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:24:38.804021   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:24:38.804095   27348 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0319 19:24:38.804125   27348 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0319 19:24:38.809979   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0319 19:24:38.810004   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0319 19:24:40.428665   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:24:40.444708   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:24:40.444821   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:24:40.449895   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0319 19:24:40.449924   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0319 19:25:09.544171   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:25:09.544299   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:25:09.550148   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0319 19:25:09.550189   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0319 19:25:09.800736   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0319 19:25:09.811056   27348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0319 19:25:09.829439   27348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:25:09.847650   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:25:09.866626   27348 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:25:09.871327   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:25:09.885218   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:10.015756   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:25:10.037115   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:25:10.037448   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:25:10.037480   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:25:10.051743   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0319 19:25:10.052142   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:25:10.052642   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:25:10.052666   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:25:10.052995   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:25:10.053205   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:25:10.053369   27348 start.go:316] joinCluster: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:25:10.053452   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0319 19:25:10.053471   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:25:10.056164   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:25:10.056571   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:25:10.056596   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:25:10.056765   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:25:10.056932   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:25:10.057104   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:25:10.057254   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
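The line above is minikube's sshutil opening an SSH client to the primary node (its IP, port 22 and the generated id_rsa) so the token-create and kubeadm join commands that follow can run remotely. A minimal, hypothetical sketch of that pattern with golang.org/x/crypto/ssh is shown below; the address, user, key path and command are placeholders, not values from this run, and this is not minikube's actual ssh_runner implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote dials an SSH server with a private key and runs a single command,
// returning its combined stdout/stderr. Illustrative only.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values; a real run would use the machine's generated key and IP.
	out, err := runRemote("192.168.39.200:22", "docker", "/path/to/id_rsa", "uname -a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}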
	I0319 19:25:10.233803   27348 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:10.233856   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka8sqy.qmlnlfdjfipv0qxg --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m02 --control-plane --apiserver-advertise-address=192.168.39.234 --apiserver-bind-port=8443"
	I0319 19:25:33.787794   27348 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka8sqy.qmlnlfdjfipv0qxg --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m02 --control-plane --apiserver-advertise-address=192.168.39.234 --apiserver-bind-port=8443": (23.553916446s)
	I0319 19:25:33.787824   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0319 19:25:34.498082   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-218762-m02 minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=ha-218762 minikube.k8s.io/primary=false
	I0319 19:25:34.657656   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-218762-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0319 19:25:34.797010   27348 start.go:318] duration metric: took 24.743639757s to joinCluster
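After the join, the log shows the new node being labeled and its control-plane NoSchedule taint removed by shelling out to the bundled kubectl. As an illustration only, the same post-join bookkeeping could be done directly against the API with client-go; the kubeconfig path and label value below are assumptions, and this is not how minikube itself implements the step.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-218762-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `kubectl label --overwrite nodes ... minikube.k8s.io/primary=false`.
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Equivalent of `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`:
	// keep every taint except the control-plane NoSchedule one.
	taints := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		taints = append(taints, t)
	}
	node.Spec.Taints = taints

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}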
	I0319 19:25:34.797098   27348 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:34.798969   27348 out.go:177] * Verifying Kubernetes components...
	I0319 19:25:34.797418   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:25:34.800398   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:35.092809   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:25:35.161141   27348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:25:35.161357   27348 kapi.go:59] client config for ha-218762: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt", KeyFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key", CAFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0319 19:25:35.161410   27348 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.200:8443
	I0319 19:25:35.161625   27348 node_ready.go:35] waiting up to 6m0s for node "ha-218762-m02" to be "Ready" ...
	I0319 19:25:35.161695   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:35.161702   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:35.161709   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:35.161712   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:35.182893   27348 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0319 19:25:35.662804   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:35.662830   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:35.662842   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:35.662847   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:35.667671   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:36.162832   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:36.162852   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:36.162861   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:36.162866   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:36.167304   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:36.662466   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:36.662489   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:36.662496   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:36.662500   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:36.671800   27348 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0319 19:25:37.161835   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:37.161865   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:37.161876   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:37.161882   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:37.165542   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:37.166376   27348 node_ready.go:53] node "ha-218762-m02" has status "Ready":"False"
	I0319 19:25:37.662711   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:37.662731   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:37.662739   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:37.662743   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:37.669135   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:25:38.162109   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:38.162128   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:38.162135   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:38.162140   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:38.165412   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:38.662046   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:38.662070   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:38.662081   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:38.662088   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:38.666467   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:39.162583   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:39.162600   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:39.162608   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:39.162613   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:39.166383   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:39.167473   27348 node_ready.go:53] node "ha-218762-m02" has status "Ready":"False"
	I0319 19:25:39.661854   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:39.661879   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:39.661889   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:39.661894   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:39.665709   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:40.162423   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:40.162447   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:40.162457   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:40.162463   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:40.166419   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:40.662769   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:40.662795   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:40.662806   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:40.662810   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:40.667756   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:41.161947   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:41.161971   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:41.162001   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:41.162006   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:41.165402   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:41.662723   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:41.662747   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:41.662760   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:41.662766   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:41.665848   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:41.666654   27348 node_ready.go:53] node "ha-218762-m02" has status "Ready":"False"
	I0319 19:25:42.161999   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:42.162017   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:42.162025   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:42.162029   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:42.165927   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:42.662122   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:42.662141   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:42.662152   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:42.662157   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:42.669266   27348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0319 19:25:43.162279   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:43.162298   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.162306   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.162310   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.166625   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:43.167248   27348 node_ready.go:49] node "ha-218762-m02" has status "Ready":"True"
	I0319 19:25:43.167267   27348 node_ready.go:38] duration metric: took 8.005626429s for node "ha-218762-m02" to be "Ready" ...
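The repeated GET /api/v1/nodes/ha-218762-m02 calls above are a poll loop waiting for the node's Ready condition to flip to True, which here took about 8 seconds. A rough client-go equivalent of that wait, with the kubeconfig path and node name as placeholders:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server every 500ms until the node reports
// Ready=True or the timeout elapses, roughly what the node_ready wait does.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(cs, "ha-218762-m02", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}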
	I0319 19:25:43.167276   27348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:25:43.167336   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:43.167347   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.167354   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.167358   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.171740   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:43.178089   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.178162   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-6f64w
	I0319 19:25:43.178177   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.178188   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.178194   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.181350   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.182042   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:43.182061   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.182070   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.182075   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.189035   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:25:43.189521   27348 pod_ready.go:92] pod "coredns-76f75df574-6f64w" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:43.189536   27348 pod_ready.go:81] duration metric: took 11.427462ms for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.189545   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.189588   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zlz9l
	I0319 19:25:43.189598   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.189604   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.189609   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.192695   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.194082   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:43.194095   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.194102   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.194105   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.196683   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.197232   27348 pod_ready.go:92] pod "coredns-76f75df574-zlz9l" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:43.197245   27348 pod_ready.go:81] duration metric: took 7.694962ms for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.197256   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.197308   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762
	I0319 19:25:43.197319   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.197328   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.197337   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.200293   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.201025   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:43.201041   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.201049   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.201055   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.203397   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.203942   27348 pod_ready.go:92] pod "etcd-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:43.203958   27348 pod_ready.go:81] duration metric: took 6.695486ms for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.203969   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.204057   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:43.204071   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.204080   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.204085   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.207034   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.207553   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:43.207567   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.207574   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.207577   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.210905   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.704715   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:43.704734   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.704741   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.704745   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.707950   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.708724   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:43.708742   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.708749   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.708753   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.711524   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:44.204920   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:44.204940   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.204948   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.204952   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.208483   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:44.209381   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:44.209397   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.209408   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.209415   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.212193   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:44.704194   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:44.704221   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.704236   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.704243   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.708644   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:44.709827   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:44.709846   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.709856   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.709862   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.715011   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:25:45.204432   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:45.204454   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.204462   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.204467   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.208183   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:45.209578   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:45.209597   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.209608   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.209612   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.212840   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:45.213404   27348 pod_ready.go:102] pod "etcd-ha-218762-m02" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:45.704850   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:45.704877   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.704886   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.704893   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.708354   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:45.709104   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:45.709117   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.709124   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.709130   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.711803   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.204411   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:46.204432   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.204440   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.204444   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.207897   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:46.208700   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:46.208723   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.208733   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.208738   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.211254   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.704303   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:46.704327   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.704336   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.704340   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.707594   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:46.708528   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:46.708544   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.708551   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.708555   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.711277   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.711792   27348 pod_ready.go:92] pod "etcd-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:46.711812   27348 pod_ready.go:81] duration metric: took 3.507834904s for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.711830   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.711885   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762
	I0319 19:25:46.711896   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.711905   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.711913   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.714566   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.715234   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:46.715247   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.715254   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.715257   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.717661   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.718248   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:46.718262   27348 pod_ready.go:81] duration metric: took 6.423275ms for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.718270   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.718309   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m02
	I0319 19:25:46.718318   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.718324   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.718328   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.721146   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.763034   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:46.763059   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.763066   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.763070   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.766397   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:46.767053   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:46.767070   27348 pod_ready.go:81] duration metric: took 48.795064ms for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.767080   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.962439   27348 request.go:629] Waited for 195.287831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:25:46.962500   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:25:46.962507   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.962519   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.962528   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.966054   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.163299   27348 request.go:629] Waited for 196.368877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:47.163347   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:47.163353   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.163360   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.163371   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.166980   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.167805   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:47.167822   27348 pod_ready.go:81] duration metric: took 400.736228ms for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.167832   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.363074   27348 request.go:629] Waited for 195.190772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:25:47.363123   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:25:47.363127   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.363135   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.363139   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.367498   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:47.562667   27348 request.go:629] Waited for 194.34216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.562723   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.562730   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.562745   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.562757   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.565715   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:47.566492   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:47.566512   27348 pod_ready.go:81] duration metric: took 398.672611ms for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.566525   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.762691   27348 request.go:629] Waited for 196.105188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:25:47.762740   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:25:47.762745   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.762752   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.762756   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.766268   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.963224   27348 request.go:629] Waited for 195.69257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.963292   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.963298   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.963305   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.963309   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.966811   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.967555   27348 pod_ready.go:92] pod "kube-proxy-9q4nx" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:47.967572   27348 pod_ready.go:81] duration metric: took 401.040932ms for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.967580   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.162725   27348 request.go:629] Waited for 195.082384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:25:48.162808   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:25:48.162819   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.162831   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.162843   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.166291   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:48.362494   27348 request.go:629] Waited for 195.31511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.362542   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.362547   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.362554   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.362559   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.365792   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:48.366668   27348 pod_ready.go:92] pod "kube-proxy-qd8kk" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:48.366687   27348 pod_ready.go:81] duration metric: took 399.101448ms for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.366696   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.562806   27348 request.go:629] Waited for 196.046058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:25:48.562891   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:25:48.562899   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.562911   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.562918   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.568798   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:25:48.763166   27348 request.go:629] Waited for 193.493235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.763221   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.763226   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.763233   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.763237   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.767000   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:48.768213   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:48.768228   27348 pod_ready.go:81] duration metric: took 401.526784ms for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.768243   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.962362   27348 request.go:629] Waited for 194.037483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:25:48.962435   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:25:48.962442   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.962459   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.962466   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.966428   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.162504   27348 request.go:629] Waited for 195.350806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:49.162548   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:49.162553   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.162560   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.162580   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.166179   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.166738   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:49.166754   27348 pod_ready.go:81] duration metric: took 398.50231ms for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:49.166767   27348 pod_ready.go:38] duration metric: took 5.999479071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
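The block above waits, component by component, for every system-critical pod matching the listed label selectors (kube-dns, etcd, apiserver, controller-manager, kube-proxy, scheduler) to report Ready. A hedged sketch of one such check using a label selector; the selector shown is one from the log and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every kube-system pod matching the selector has a
// Ready=True condition, similar to what the per-component pod_ready wait verifies.
func allReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ok, err := allReady(cs, "component=etcd")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("all etcd pods ready:", ok)
}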
	I0319 19:25:49.166790   27348 api_server.go:52] waiting for apiserver process to appear ...
	I0319 19:25:49.166842   27348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:25:49.185557   27348 api_server.go:72] duration metric: took 14.388418616s to wait for apiserver process to appear ...
	I0319 19:25:49.185578   27348 api_server.go:88] waiting for apiserver healthz status ...
	I0319 19:25:49.185592   27348 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0319 19:25:49.189987   27348 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0319 19:25:49.190050   27348 round_trippers.go:463] GET https://192.168.39.200:8443/version
	I0319 19:25:49.190061   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.190072   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.190085   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.192774   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:49.193056   27348 api_server.go:141] control plane version: v1.29.3
	I0319 19:25:49.193080   27348 api_server.go:131] duration metric: took 7.495817ms to wait for apiserver health ...
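Here the test first hits /healthz (the body is the literal "ok" shown above) and then reads /version, which is where the "control plane version: v1.29.3" line comes from. Both checks can be reproduced through client-go's discovery REST client; the kubeconfig path below is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// GET /healthz with the credentials the kubeconfig carries; the body is "ok" when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version to read the control-plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}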
	I0319 19:25:49.193088   27348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 19:25:49.362388   27348 request.go:629] Waited for 169.242058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.362478   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.362489   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.362499   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.362509   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.368327   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:25:49.373508   27348 system_pods.go:59] 17 kube-system pods found
	I0319 19:25:49.373533   27348 system_pods.go:61] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:25:49.373538   27348 system_pods.go:61] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:25:49.373541   27348 system_pods.go:61] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:25:49.373546   27348 system_pods.go:61] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:25:49.373549   27348 system_pods.go:61] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:25:49.373552   27348 system_pods.go:61] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:25:49.373555   27348 system_pods.go:61] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:25:49.373559   27348 system_pods.go:61] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:25:49.373562   27348 system_pods.go:61] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:25:49.373565   27348 system_pods.go:61] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:25:49.373568   27348 system_pods.go:61] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:25:49.373570   27348 system_pods.go:61] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:25:49.373573   27348 system_pods.go:61] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:25:49.373579   27348 system_pods.go:61] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:25:49.373582   27348 system_pods.go:61] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:25:49.373584   27348 system_pods.go:61] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:25:49.373587   27348 system_pods.go:61] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:25:49.373592   27348 system_pods.go:74] duration metric: took 180.499021ms to wait for pod list to return data ...
	I0319 19:25:49.373601   27348 default_sa.go:34] waiting for default service account to be created ...
	I0319 19:25:49.563023   27348 request.go:629] Waited for 189.357435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:25:49.563077   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:25:49.563082   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.563090   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.563095   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.566918   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.567127   27348 default_sa.go:45] found service account: "default"
	I0319 19:25:49.567143   27348 default_sa.go:55] duration metric: took 193.536936ms for default service account to be created ...
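Waiting for the default service account amounts to polling the default namespace until a ServiceAccount named "default" exists. A brief sketch under the same placeholder-kubeconfig assumption:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until the "default" ServiceAccount shows up in the default namespace.
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("found service account: default")
			return
		}
		time.Sleep(time.Second)
	}
}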
	I0319 19:25:49.567151   27348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 19:25:49.762527   27348 request.go:629] Waited for 195.314439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.762604   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.762612   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.762621   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.762629   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.769072   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:25:49.775148   27348 system_pods.go:86] 17 kube-system pods found
	I0319 19:25:49.775172   27348 system_pods.go:89] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:25:49.775178   27348 system_pods.go:89] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:25:49.775181   27348 system_pods.go:89] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:25:49.775186   27348 system_pods.go:89] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:25:49.775189   27348 system_pods.go:89] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:25:49.775193   27348 system_pods.go:89] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:25:49.775196   27348 system_pods.go:89] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:25:49.775202   27348 system_pods.go:89] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:25:49.775208   27348 system_pods.go:89] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:25:49.775214   27348 system_pods.go:89] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:25:49.775223   27348 system_pods.go:89] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:25:49.775234   27348 system_pods.go:89] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:25:49.775249   27348 system_pods.go:89] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:25:49.775255   27348 system_pods.go:89] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:25:49.775259   27348 system_pods.go:89] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:25:49.775266   27348 system_pods.go:89] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:25:49.775272   27348 system_pods.go:89] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:25:49.775282   27348 system_pods.go:126] duration metric: took 208.126231ms to wait for k8s-apps to be running ...
	I0319 19:25:49.775290   27348 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 19:25:49.775338   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:25:49.793710   27348 system_svc.go:56] duration metric: took 18.411138ms WaitForService to wait for kubelet
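The "waiting for kubelet service" step above simply asks systemd, over the same SSH session, whether the unit is active. A hedged local equivalent with os/exec, relying only on the exit status of systemctl is-active:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true when `systemctl is-active --quiet kubelet`
// exits 0, i.e. systemd considers the unit active.
func kubeletActive() bool {
	// --quiet suppresses output; the exit code alone carries the answer.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	return err == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}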
	I0319 19:25:49.793746   27348 kubeadm.go:576] duration metric: took 14.996608122s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:25:49.793771   27348 node_conditions.go:102] verifying NodePressure condition ...
	I0319 19:25:49.963171   27348 request.go:629] Waited for 169.333432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0319 19:25:49.963267   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0319 19:25:49.963280   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.963291   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.963294   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.967076   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.967857   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:25:49.967876   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:25:49.967886   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:25:49.967890   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:25:49.967895   27348 node_conditions.go:105] duration metric: took 174.117975ms to run NodePressure ...
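The NodePressure check reads each node's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node here) and verifies no pressure conditions are set. A small sketch that prints the same figures for every node; the kubeconfig path is a placeholder and the exact fields minikube inspects may differ:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		// A healthy node reports MemoryPressure/DiskPressure/PIDPressure as False.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True\n", c.Type)
				}
			}
		}
	}
}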
	I0319 19:25:49.967905   27348 start.go:240] waiting for startup goroutines ...
	I0319 19:25:49.967926   27348 start.go:254] writing updated cluster config ...
	I0319 19:25:49.970258   27348 out.go:177] 
	I0319 19:25:49.972154   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:25:49.972283   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:25:49.973965   27348 out.go:177] * Starting "ha-218762-m03" control-plane node in "ha-218762" cluster
	I0319 19:25:49.975069   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:25:49.975087   27348 cache.go:56] Caching tarball of preloaded images
	I0319 19:25:49.975176   27348 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:25:49.975188   27348 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:25:49.975280   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:25:49.975459   27348 start.go:360] acquireMachinesLock for ha-218762-m03: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:25:49.975507   27348 start.go:364] duration metric: took 25.079µs to acquireMachinesLock for "ha-218762-m03"
	I0319 19:25:49.975530   27348 start.go:93] Provisioning new machine with config: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:49.975628   27348 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0319 19:25:49.977206   27348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 19:25:49.977288   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:25:49.977325   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:25:49.991624   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39853
	I0319 19:25:49.992012   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:25:49.992436   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:25:49.992454   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:25:49.992764   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:25:49.992974   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:25:49.993124   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:25:49.993270   27348 start.go:159] libmachine.API.Create for "ha-218762" (driver="kvm2")
	I0319 19:25:49.993292   27348 client.go:168] LocalClient.Create starting
	I0319 19:25:49.993317   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:25:49.993344   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:25:49.993357   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:25:49.993409   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:25:49.993428   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:25:49.993441   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:25:49.993459   27348 main.go:141] libmachine: Running pre-create checks...
	I0319 19:25:49.993466   27348 main.go:141] libmachine: (ha-218762-m03) Calling .PreCreateCheck
	I0319 19:25:49.993637   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetConfigRaw
	I0319 19:25:49.994007   27348 main.go:141] libmachine: Creating machine...
	I0319 19:25:49.994020   27348 main.go:141] libmachine: (ha-218762-m03) Calling .Create
	I0319 19:25:49.994160   27348 main.go:141] libmachine: (ha-218762-m03) Creating KVM machine...
	I0319 19:25:49.995282   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found existing default KVM network
	I0319 19:25:49.995401   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found existing private KVM network mk-ha-218762
	I0319 19:25:49.995556   27348 main.go:141] libmachine: (ha-218762-m03) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03 ...
	I0319 19:25:49.995582   27348 main.go:141] libmachine: (ha-218762-m03) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:25:49.995625   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:49.995537   28122 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:25:49.995726   27348 main.go:141] libmachine: (ha-218762-m03) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:25:50.216991   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:50.216859   28122 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa...
	I0319 19:25:50.331847   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:50.331748   28122 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/ha-218762-m03.rawdisk...
	I0319 19:25:50.331870   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Writing magic tar header
	I0319 19:25:50.331887   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Writing SSH key tar header
	I0319 19:25:50.331963   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:50.331903   28122 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03 ...
	I0319 19:25:50.332068   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03 (perms=drwx------)
	I0319 19:25:50.332081   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03
	I0319 19:25:50.332088   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:25:50.332099   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:25:50.332105   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:25:50.332112   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:25:50.332127   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:25:50.332142   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:25:50.332162   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:25:50.332174   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:25:50.332180   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:25:50.332188   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home
	I0319 19:25:50.332193   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Skipping /home - not owner
	I0319 19:25:50.332202   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:25:50.332216   27348 main.go:141] libmachine: (ha-218762-m03) Creating domain...
	I0319 19:25:50.333206   27348 main.go:141] libmachine: (ha-218762-m03) define libvirt domain using xml: 
	I0319 19:25:50.333233   27348 main.go:141] libmachine: (ha-218762-m03) <domain type='kvm'>
	I0319 19:25:50.333244   27348 main.go:141] libmachine: (ha-218762-m03)   <name>ha-218762-m03</name>
	I0319 19:25:50.333260   27348 main.go:141] libmachine: (ha-218762-m03)   <memory unit='MiB'>2200</memory>
	I0319 19:25:50.333268   27348 main.go:141] libmachine: (ha-218762-m03)   <vcpu>2</vcpu>
	I0319 19:25:50.333276   27348 main.go:141] libmachine: (ha-218762-m03)   <features>
	I0319 19:25:50.333284   27348 main.go:141] libmachine: (ha-218762-m03)     <acpi/>
	I0319 19:25:50.333294   27348 main.go:141] libmachine: (ha-218762-m03)     <apic/>
	I0319 19:25:50.333314   27348 main.go:141] libmachine: (ha-218762-m03)     <pae/>
	I0319 19:25:50.333326   27348 main.go:141] libmachine: (ha-218762-m03)     
	I0319 19:25:50.333333   27348 main.go:141] libmachine: (ha-218762-m03)   </features>
	I0319 19:25:50.333341   27348 main.go:141] libmachine: (ha-218762-m03)   <cpu mode='host-passthrough'>
	I0319 19:25:50.333346   27348 main.go:141] libmachine: (ha-218762-m03)   
	I0319 19:25:50.333353   27348 main.go:141] libmachine: (ha-218762-m03)   </cpu>
	I0319 19:25:50.333359   27348 main.go:141] libmachine: (ha-218762-m03)   <os>
	I0319 19:25:50.333363   27348 main.go:141] libmachine: (ha-218762-m03)     <type>hvm</type>
	I0319 19:25:50.333371   27348 main.go:141] libmachine: (ha-218762-m03)     <boot dev='cdrom'/>
	I0319 19:25:50.333376   27348 main.go:141] libmachine: (ha-218762-m03)     <boot dev='hd'/>
	I0319 19:25:50.333382   27348 main.go:141] libmachine: (ha-218762-m03)     <bootmenu enable='no'/>
	I0319 19:25:50.333394   27348 main.go:141] libmachine: (ha-218762-m03)   </os>
	I0319 19:25:50.333401   27348 main.go:141] libmachine: (ha-218762-m03)   <devices>
	I0319 19:25:50.333412   27348 main.go:141] libmachine: (ha-218762-m03)     <disk type='file' device='cdrom'>
	I0319 19:25:50.333423   27348 main.go:141] libmachine: (ha-218762-m03)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/boot2docker.iso'/>
	I0319 19:25:50.333431   27348 main.go:141] libmachine: (ha-218762-m03)       <target dev='hdc' bus='scsi'/>
	I0319 19:25:50.333436   27348 main.go:141] libmachine: (ha-218762-m03)       <readonly/>
	I0319 19:25:50.333443   27348 main.go:141] libmachine: (ha-218762-m03)     </disk>
	I0319 19:25:50.333449   27348 main.go:141] libmachine: (ha-218762-m03)     <disk type='file' device='disk'>
	I0319 19:25:50.333458   27348 main.go:141] libmachine: (ha-218762-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:25:50.333471   27348 main.go:141] libmachine: (ha-218762-m03)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/ha-218762-m03.rawdisk'/>
	I0319 19:25:50.333480   27348 main.go:141] libmachine: (ha-218762-m03)       <target dev='hda' bus='virtio'/>
	I0319 19:25:50.333488   27348 main.go:141] libmachine: (ha-218762-m03)     </disk>
	I0319 19:25:50.333493   27348 main.go:141] libmachine: (ha-218762-m03)     <interface type='network'>
	I0319 19:25:50.333501   27348 main.go:141] libmachine: (ha-218762-m03)       <source network='mk-ha-218762'/>
	I0319 19:25:50.333508   27348 main.go:141] libmachine: (ha-218762-m03)       <model type='virtio'/>
	I0319 19:25:50.333514   27348 main.go:141] libmachine: (ha-218762-m03)     </interface>
	I0319 19:25:50.333521   27348 main.go:141] libmachine: (ha-218762-m03)     <interface type='network'>
	I0319 19:25:50.333527   27348 main.go:141] libmachine: (ha-218762-m03)       <source network='default'/>
	I0319 19:25:50.333534   27348 main.go:141] libmachine: (ha-218762-m03)       <model type='virtio'/>
	I0319 19:25:50.333539   27348 main.go:141] libmachine: (ha-218762-m03)     </interface>
	I0319 19:25:50.333544   27348 main.go:141] libmachine: (ha-218762-m03)     <serial type='pty'>
	I0319 19:25:50.333556   27348 main.go:141] libmachine: (ha-218762-m03)       <target port='0'/>
	I0319 19:25:50.333569   27348 main.go:141] libmachine: (ha-218762-m03)     </serial>
	I0319 19:25:50.333587   27348 main.go:141] libmachine: (ha-218762-m03)     <console type='pty'>
	I0319 19:25:50.333605   27348 main.go:141] libmachine: (ha-218762-m03)       <target type='serial' port='0'/>
	I0319 19:25:50.333616   27348 main.go:141] libmachine: (ha-218762-m03)     </console>
	I0319 19:25:50.333626   27348 main.go:141] libmachine: (ha-218762-m03)     <rng model='virtio'>
	I0319 19:25:50.333637   27348 main.go:141] libmachine: (ha-218762-m03)       <backend model='random'>/dev/random</backend>
	I0319 19:25:50.333649   27348 main.go:141] libmachine: (ha-218762-m03)     </rng>
	I0319 19:25:50.333661   27348 main.go:141] libmachine: (ha-218762-m03)     
	I0319 19:25:50.333670   27348 main.go:141] libmachine: (ha-218762-m03)     
	I0319 19:25:50.333681   27348 main.go:141] libmachine: (ha-218762-m03)   </devices>
	I0319 19:25:50.333691   27348 main.go:141] libmachine: (ha-218762-m03) </domain>
	I0319 19:25:50.333703   27348 main.go:141] libmachine: (ha-218762-m03) 
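
For reference, "define libvirt domain using xml" boils down to handing a document like the one above to libvirt. The sketch below is illustrative only, not the minikube kvm2 driver code; it assumes the github.com/libvirt/libvirt-go bindings and a local domain.xml file.

    package main

    import (
    	"log"
    	"os"

    	libvirt "github.com/libvirt/libvirt-go" // assumed bindings, not from the log
    )

    func main() {
    	// A <domain type='kvm'> document like the one logged above.
    	xml, err := os.ReadFile("domain.xml")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Same URI as KVMQemuURI in the machine config.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Register the domain definition with libvirt ("define libvirt domain using xml").
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	// Boot the defined domain ("Creating domain...").
    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    }
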
	I0319 19:25:50.340864   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:b1:6c:94 in network default
	I0319 19:25:50.341477   27348 main.go:141] libmachine: (ha-218762-m03) Ensuring networks are active...
	I0319 19:25:50.341503   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:50.342231   27348 main.go:141] libmachine: (ha-218762-m03) Ensuring network default is active
	I0319 19:25:50.342629   27348 main.go:141] libmachine: (ha-218762-m03) Ensuring network mk-ha-218762 is active
	I0319 19:25:50.343095   27348 main.go:141] libmachine: (ha-218762-m03) Getting domain xml...
	I0319 19:25:50.343830   27348 main.go:141] libmachine: (ha-218762-m03) Creating domain...
	I0319 19:25:51.553932   27348 main.go:141] libmachine: (ha-218762-m03) Waiting to get IP...
	I0319 19:25:51.554758   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:51.555276   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:51.555306   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:51.555246   28122 retry.go:31] will retry after 284.654431ms: waiting for machine to come up
	I0319 19:25:51.841781   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:51.842213   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:51.842243   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:51.842162   28122 retry.go:31] will retry after 359.163065ms: waiting for machine to come up
	I0319 19:25:52.202706   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:52.203142   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:52.203171   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:52.203106   28122 retry.go:31] will retry after 305.30754ms: waiting for machine to come up
	I0319 19:25:52.510504   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:52.511008   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:52.511046   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:52.510981   28122 retry.go:31] will retry after 389.598505ms: waiting for machine to come up
	I0319 19:25:52.902345   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:52.902769   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:52.902792   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:52.902733   28122 retry.go:31] will retry after 706.518988ms: waiting for machine to come up
	I0319 19:25:53.610433   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:53.610863   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:53.610898   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:53.610818   28122 retry.go:31] will retry after 837.390706ms: waiting for machine to come up
	I0319 19:25:54.449569   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:54.449995   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:54.450022   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:54.449947   28122 retry.go:31] will retry after 1.115275188s: waiting for machine to come up
	I0319 19:25:55.567420   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:55.567784   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:55.567803   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:55.567738   28122 retry.go:31] will retry after 1.214137992s: waiting for machine to come up
	I0319 19:25:56.782933   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:56.783274   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:56.783322   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:56.783235   28122 retry.go:31] will retry after 1.594483272s: waiting for machine to come up
	I0319 19:25:58.378826   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:58.379221   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:58.379241   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:58.379186   28122 retry.go:31] will retry after 2.286199759s: waiting for machine to come up
	I0319 19:26:00.667332   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:00.667750   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:00.667816   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:00.667737   28122 retry.go:31] will retry after 1.954108791s: waiting for machine to come up
	I0319 19:26:02.622969   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:02.623461   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:02.623493   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:02.623410   28122 retry.go:31] will retry after 3.05464745s: waiting for machine to come up
	I0319 19:26:05.679695   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:05.680198   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:05.680214   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:05.680169   28122 retry.go:31] will retry after 2.868429032s: waiting for machine to come up
	I0319 19:26:08.550173   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:08.550630   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:08.550651   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:08.550599   28122 retry.go:31] will retry after 3.589077433s: waiting for machine to come up
	I0319 19:26:12.141536   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.142056   27348 main.go:141] libmachine: (ha-218762-m03) Found IP for machine: 192.168.39.15
	I0319 19:26:12.142075   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has current primary IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
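
The repeated "will retry after ..." lines above come from a bounded retry loop around the DHCP-lease lookup. A minimal sketch of that pattern follows; the delays and the lookup callback are made up for illustration and are not minikube's retry.go.

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForIP polls lookup until it returns a non-empty address or the timeout
    // expires, doubling the delay between attempts (capped), roughly mirroring
    // the "waiting for machine to come up" retries in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(delay)
    		if delay < 4*time.Second { // back off, but cap the wait
    			delay *= 2
    		}
    	}
    	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", fmt.Errorf("lease not found yet")
    		}
    		return "192.168.39.15", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }
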
	I0319 19:26:12.142081   27348 main.go:141] libmachine: (ha-218762-m03) Reserving static IP address...
	I0319 19:26:12.142497   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find host DHCP lease matching {name: "ha-218762-m03", mac: "52:54:00:13:34:f4", ip: "192.168.39.15"} in network mk-ha-218762
	I0319 19:26:12.214606   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Getting to WaitForSSH function...
	I0319 19:26:12.214634   27348 main.go:141] libmachine: (ha-218762-m03) Reserved static IP address: 192.168.39.15
	I0319 19:26:12.214647   27348 main.go:141] libmachine: (ha-218762-m03) Waiting for SSH to be available...
	I0319 19:26:12.218432   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.218787   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.218818   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.218945   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Using SSH client type: external
	I0319 19:26:12.218973   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa (-rw-------)
	I0319 19:26:12.219019   27348 main.go:141] libmachine: (ha-218762-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:26:12.219037   27348 main.go:141] libmachine: (ha-218762-m03) DBG | About to run SSH command:
	I0319 19:26:12.219049   27348 main.go:141] libmachine: (ha-218762-m03) DBG | exit 0
	I0319 19:26:12.344778   27348 main.go:141] libmachine: (ha-218762-m03) DBG | SSH cmd err, output: <nil>: 
	I0319 19:26:12.345044   27348 main.go:141] libmachine: (ha-218762-m03) KVM machine creation complete!
	I0319 19:26:12.345383   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetConfigRaw
	I0319 19:26:12.345883   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:12.346058   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:12.346216   27348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:26:12.346229   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:26:12.347581   27348 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:26:12.347598   27348 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:26:12.347605   27348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:26:12.347615   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.349863   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.350216   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.350246   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.350379   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.350526   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.350644   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.350760   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.350906   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.351130   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.351142   27348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:26:12.460078   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
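
The WaitForSSH step simply runs `exit 0` over SSH until it succeeds. Below is a self-contained sketch of one such probe using golang.org/x/crypto/ssh; the key path and address are placeholders, and real callers wrap this in a retry loop.

    package main

    import (
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Placeholder key path; the log uses the per-machine id_rsa under .minikube.
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-218762-m03/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.15:22", cfg)
    	if err != nil {
    		log.Fatal(err) // a real caller retries until the guest is reachable
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// Success of this no-op command is the "SSH is available" signal.
    	if err := sess.Run("exit 0"); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("SSH is available")
    }
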
	I0319 19:26:12.460108   27348 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:26:12.460120   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.462835   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.463254   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.463283   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.463399   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.463593   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.463725   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.463921   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.464098   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.464288   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.464300   27348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:26:12.573659   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:26:12.573752   27348 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:26:12.573767   27348 main.go:141] libmachine: Provisioning with buildroot...
	I0319 19:26:12.573777   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:26:12.574032   27348 buildroot.go:166] provisioning hostname "ha-218762-m03"
	I0319 19:26:12.574062   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:26:12.574253   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.576834   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.577126   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.577162   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.577301   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.577475   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.577636   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.577759   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.577897   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.578088   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.578104   27348 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762-m03 && echo "ha-218762-m03" | sudo tee /etc/hostname
	I0319 19:26:12.704916   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762-m03
	
	I0319 19:26:12.704997   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.707926   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.708306   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.708342   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.708604   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.708811   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.708970   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.709121   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.709275   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.709482   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.709500   27348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:26:12.831414   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
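
The hostname and /etc/hosts snippet above is assembled as a single remote shell command. A small illustrative sketch of how such a command could be templated (minikube's own template lives in its provisioner code):

    package main

    import "fmt"

    // hostnameCmd renders a shell snippet equivalent to the one in the log:
    // set the hostname, then make sure /etc/hosts has a matching 127.0.1.1 entry.
    func hostnameCmd(name string) string {
    	return fmt.Sprintf(
    		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
    	fmt.Println(hostnameCmd("ha-218762-m03"))
    }
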
	I0319 19:26:12.831441   27348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:26:12.831460   27348 buildroot.go:174] setting up certificates
	I0319 19:26:12.831470   27348 provision.go:84] configureAuth start
	I0319 19:26:12.831479   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:26:12.831730   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:12.834298   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.834620   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.834649   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.834762   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.836964   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.837332   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.837352   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.837490   27348 provision.go:143] copyHostCerts
	I0319 19:26:12.837521   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:26:12.837557   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:26:12.837574   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:26:12.837652   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:26:12.837737   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:26:12.837764   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:26:12.837774   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:26:12.837810   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:26:12.837875   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:26:12.837903   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:26:12.837912   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:26:12.837945   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:26:12.838007   27348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762-m03 san=[127.0.0.1 192.168.39.15 ha-218762-m03 localhost minikube]
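
The "generating server cert" step issues a certificate whose SANs cover the node's hostname and IPs. A stand-in sketch with crypto/x509 (self-signed here for brevity; minikube signs against its CA key instead, and the 26280h lifetime mirrors CertExpiration in the config above):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-218762-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log.
    		DNSNames:    []string{"ha-218762-m03", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
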
	I0319 19:26:12.934552   27348 provision.go:177] copyRemoteCerts
	I0319 19:26:12.934612   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:26:12.934639   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.936994   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.937362   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.937393   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.937614   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.937799   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.937972   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.938111   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.022899   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:26:13.022975   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:26:13.051348   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:26:13.051479   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0319 19:26:13.084387   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:26:13.084455   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 19:26:13.114451   27348 provision.go:87] duration metric: took 282.970424ms to configureAuth
	I0319 19:26:13.114475   27348 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:26:13.114700   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:26:13.114803   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.117440   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.117827   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.117858   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.118034   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.118209   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.118386   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.118525   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.118720   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:13.118877   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:13.118891   27348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:26:13.406083   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:26:13.406114   27348 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:26:13.406124   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetURL
	I0319 19:26:13.407443   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Using libvirt version 6000000
	I0319 19:26:13.409759   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.410173   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.410205   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.410365   27348 main.go:141] libmachine: Docker is up and running!
	I0319 19:26:13.410379   27348 main.go:141] libmachine: Reticulating splines...
	I0319 19:26:13.410387   27348 client.go:171] duration metric: took 23.417086044s to LocalClient.Create
	I0319 19:26:13.410415   27348 start.go:167] duration metric: took 23.417138448s to libmachine.API.Create "ha-218762"
	I0319 19:26:13.410428   27348 start.go:293] postStartSetup for "ha-218762-m03" (driver="kvm2")
	I0319 19:26:13.410445   27348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:26:13.410465   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.410681   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:26:13.410703   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.413029   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.413375   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.413432   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.413545   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.413712   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.413878   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.414049   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.500637   27348 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:26:13.505817   27348 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:26:13.505836   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:26:13.505890   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:26:13.505987   27348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:26:13.506001   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:26:13.506081   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:26:13.518153   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:26:13.547511   27348 start.go:296] duration metric: took 137.067109ms for postStartSetup
	I0319 19:26:13.547556   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetConfigRaw
	I0319 19:26:13.548127   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:13.550736   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.551082   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.551112   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.551340   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:26:13.551514   27348 start.go:128] duration metric: took 23.575877277s to createHost
	I0319 19:26:13.551535   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.553622   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.554004   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.554024   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.554209   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.554386   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.554556   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.554698   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.554949   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:13.555163   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:13.555178   27348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:26:13.661985   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876373.630389219
	
	I0319 19:26:13.662007   27348 fix.go:216] guest clock: 1710876373.630389219
	I0319 19:26:13.662014   27348 fix.go:229] Guest: 2024-03-19 19:26:13.630389219 +0000 UTC Remote: 2024-03-19 19:26:13.551525669 +0000 UTC m=+180.018181109 (delta=78.86355ms)
	I0319 19:26:13.662029   27348 fix.go:200] guest clock delta is within tolerance: 78.86355ms
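
The guest-clock lines compare the time reported by the VM (via `date +%s.%N`) against the host clock and only resync if the skew exceeds a tolerance. A toy version of that check, with the tolerance value assumed rather than taken from minikube's fix.go:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const tolerance = 2 * time.Second // assumed threshold for illustration

    	// Parsed from the guest's `date +%s.%N` output, e.g. 1710876373.630389219.
    	guest := time.Unix(1710876373, 630389219)
    	host := time.Now()

    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > tolerance {
    		fmt.Printf("guest clock is off by %s, resync needed\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
    	}
    }
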
	I0319 19:26:13.662037   27348 start.go:83] releasing machines lock for "ha-218762-m03", held for 23.68651518s
	I0319 19:26:13.662052   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.662326   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:13.664748   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.665095   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.665124   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.667018   27348 out.go:177] * Found network options:
	I0319 19:26:13.668287   27348 out.go:177]   - NO_PROXY=192.168.39.200,192.168.39.234
	W0319 19:26:13.669507   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	W0319 19:26:13.669530   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:26:13.669540   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.670019   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.670194   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.670297   27348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:26:13.670352   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	W0319 19:26:13.670367   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	W0319 19:26:13.670393   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:26:13.670460   27348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:26:13.670479   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.672959   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673176   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673292   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.673315   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673497   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.673613   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.673653   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673682   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.673809   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.673874   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.674015   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.674007   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.674186   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.674341   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.928223   27348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:26:13.935179   27348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:26:13.935283   27348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:26:13.953260   27348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 19:26:13.953279   27348 start.go:494] detecting cgroup driver to use...
	I0319 19:26:13.953343   27348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:26:13.969520   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:26:13.984872   27348 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:26:13.985266   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:26:14.001173   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:26:14.015535   27348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:26:14.144819   27348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:26:14.324903   27348 docker.go:233] disabling docker service ...
	I0319 19:26:14.324974   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:26:14.339822   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:26:14.353753   27348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:26:14.489408   27348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:26:14.622800   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:26:14.639326   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:26:14.660342   27348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:26:14.660412   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.672540   27348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:26:14.672589   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.684564   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.696391   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.709007   27348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:26:14.721133   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.733438   27348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.752796   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
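
Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this fragment is reconstructed from the commands in the log rather than captured from the node:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
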
	I0319 19:26:14.764903   27348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:26:14.776035   27348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:26:14.776089   27348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:26:14.794027   27348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:26:14.806482   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:26:14.943339   27348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:26:15.100311   27348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:26:15.100390   27348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:26:15.106098   27348 start.go:562] Will wait 60s for crictl version
	I0319 19:26:15.106151   27348 ssh_runner.go:195] Run: which crictl
	I0319 19:26:15.111443   27348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:26:15.157129   27348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:26:15.157193   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:26:15.186981   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:26:15.226072   27348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:26:15.227676   27348 out.go:177]   - env NO_PROXY=192.168.39.200
	I0319 19:26:15.229271   27348 out.go:177]   - env NO_PROXY=192.168.39.200,192.168.39.234
	I0319 19:26:15.230655   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:15.233117   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:15.233496   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:15.233528   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:15.233703   27348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:26:15.238689   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:26:15.252544   27348 mustload.go:65] Loading cluster: ha-218762
	I0319 19:26:15.252808   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:26:15.253071   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:26:15.253106   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:26:15.268729   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I0319 19:26:15.269067   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:26:15.269517   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:26:15.269539   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:26:15.269857   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:26:15.270045   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:26:15.271600   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:26:15.271925   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:26:15.271961   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:26:15.286223   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0319 19:26:15.286670   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:26:15.287128   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:26:15.287148   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:26:15.287462   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:26:15.287643   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:26:15.287787   27348 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.15
	I0319 19:26:15.287800   27348 certs.go:194] generating shared ca certs ...
	I0319 19:26:15.287817   27348 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:26:15.287938   27348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:26:15.287975   27348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:26:15.287984   27348 certs.go:256] generating profile certs ...
	I0319 19:26:15.288049   27348 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:26:15.288071   27348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953
	I0319 19:26:15.288085   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.15 192.168.39.254]
	I0319 19:26:15.441633   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953 ...
	I0319 19:26:15.441667   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953: {Name:mk13010d0a9c760f910acf1d4c93353a08108724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:26:15.441876   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953 ...
	I0319 19:26:15.441896   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953: {Name:mk956539c2e7a7a2a428fbbe80d4ebfa29546d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:26:15.441976   27348 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:26:15.442099   27348 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:26:15.442208   27348 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
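
The apiserver serving certificate generated at crypto.go:68 above is signed with IP SANs for the service IPs, localhost, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254, so the same certificate is valid whichever endpoint a client dials. A standard-library sketch of signing such a cert against an already-loaded CA (this is not minikube's code; signServingCert and the package name are illustrative):

    package certsketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // signServingCert issues a serving certificate whose IP SANs cover every
    // address clients may use to reach the apiserver (service IP, node IPs, VIP).
    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        var sans []net.IP
        for _, s := range ips {
            sans = append(sans, net.ParseIP(s))
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses:  sans,
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
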
	I0319 19:26:15.442223   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:26:15.442236   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:26:15.442248   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:26:15.442261   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:26:15.442273   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:26:15.442285   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:26:15.442298   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:26:15.442310   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:26:15.442356   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:26:15.442384   27348 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:26:15.442394   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:26:15.442413   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:26:15.442432   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:26:15.442452   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:26:15.442487   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:26:15.442510   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:26:15.442525   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:15.442537   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:26:15.442565   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:26:15.445504   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:15.445888   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:26:15.445918   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:15.446064   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:26:15.446252   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:26:15.446454   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:26:15.446592   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:26:15.524606   27348 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0319 19:26:15.531378   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0319 19:26:15.544911   27348 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0319 19:26:15.550115   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0319 19:26:15.562493   27348 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0319 19:26:15.567246   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0319 19:26:15.579084   27348 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0319 19:26:15.583737   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0319 19:26:15.596703   27348 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0319 19:26:15.601709   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0319 19:26:15.615954   27348 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0319 19:26:15.622514   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0319 19:26:15.634627   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:26:15.664312   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:26:15.692121   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:26:15.721511   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:26:15.750156   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0319 19:26:15.777506   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:26:15.804253   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:26:15.832217   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:26:15.860142   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:26:15.888811   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:26:15.916964   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:26:15.948323   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0319 19:26:15.967404   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0319 19:26:15.986570   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0319 19:26:16.005531   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0319 19:26:16.023989   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0319 19:26:16.043240   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0319 19:26:16.061549   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0319 19:26:16.079922   27348 ssh_runner.go:195] Run: openssl version
	I0319 19:26:16.086160   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:26:16.097602   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:26:16.102718   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:26:16.102761   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:26:16.109495   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:26:16.121439   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:26:16.133574   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:26:16.138951   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:26:16.139008   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:26:16.146150   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:26:16.159625   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:26:16.171388   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:16.176430   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:16.176480   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:16.183402   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
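
The openssl/ln sequence above installs each PEM under /etc/ssl/certs twice: once by name and once as a symlink named after its OpenSSL subject hash with a ".0" suffix (51391683.0, 3ec20f2e.0 and b5213941.0 here), which is the layout OpenSSL-based clients use to find CA certificates by hash. A small sketch that shells out to openssl the same way the log does (linkByHash is an illustrative name):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash runs `openssl x509 -hash -noout -in <pem>` and creates the
    // /etc/ssl/certs/<hash>.0 symlink that hash-based CA lookups expect.
    func linkByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent of `ln -fs`: drop any stale link, then recreate it.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }
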
	I0319 19:26:16.196048   27348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:26:16.201339   27348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:26:16.201398   27348 kubeadm.go:928] updating node {m03 192.168.39.15 8443 v1.29.3 crio true true} ...
	I0319 19:26:16.201542   27348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:26:16.201580   27348 kube-vip.go:111] generating kube-vip config ...
	I0319 19:26:16.201614   27348 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:26:16.218682   27348 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:26:16.218754   27348 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0319 19:26:16.218798   27348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:26:16.229760   27348 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0319 19:26:16.229810   27348 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0319 19:26:16.240616   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0319 19:26:16.240641   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:26:16.240648   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0319 19:26:16.240692   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:26:16.240706   27348 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:26:16.240648   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0319 19:26:16.240746   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:26:16.240808   27348 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:26:16.259208   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0319 19:26:16.259247   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0319 19:26:16.259277   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:26:16.259326   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0319 19:26:16.259353   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0319 19:26:16.259368   27348 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:26:16.295855   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0319 19:26:16.295909   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
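
binary.go:76 above resolves each of kubectl, kubeadm and kubelet to its dl.k8s.io URL with a checksum=file:...sha256 hint, so the download is verified against the published SHA-256 digest before the binary is copied onto the node. A standard-library sketch of that download-and-verify step (the URL is the one from the log; fetchVerified is an illustrative name, not minikube's function):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dest and checks it against the hex digest
    // published at url+".sha256", mirroring the checksum=file:... hint above.
    func fetchVerified(url, dest string) error {
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(sumBytes))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file for %s", url)
        }
        want := fields[0]

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl", "/tmp/kubectl"))
    }
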
	I0319 19:26:17.319764   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0319 19:26:17.330123   27348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0319 19:26:17.348726   27348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:26:17.369990   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:26:17.388196   27348 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:26:17.392454   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:26:17.406789   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:26:17.545568   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:26:17.566583   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:26:17.567026   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:26:17.567076   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:26:17.583385   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0319 19:26:17.583844   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:26:17.584407   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:26:17.584429   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:26:17.584834   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:26:17.585046   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:26:17.585246   27348 start.go:316] joinCluster: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:26:17.585368   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0319 19:26:17.585392   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:26:17.588998   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:17.589437   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:26:17.589464   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:17.589673   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:26:17.589837   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:26:17.590005   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:26:17.590151   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:26:17.766429   27348 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:26:17.766497   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 52r08t.wwjtmsr7pzpgtkbh --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m03 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443"
	I0319 19:26:45.630807   27348 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 52r08t.wwjtmsr7pzpgtkbh --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m03 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443": (27.864280885s)
	I0319 19:26:45.630853   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0319 19:26:46.154388   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-218762-m03 minikube.k8s.io/updated_at=2024_03_19T19_26_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=ha-218762 minikube.k8s.io/primary=false
	I0319 19:26:46.298480   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-218762-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0319 19:26:46.420982   27348 start.go:318] duration metric: took 28.835732463s to joinCluster
	I0319 19:26:46.421054   27348 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:26:46.422736   27348 out.go:177] * Verifying Kubernetes components...
	I0319 19:26:46.421378   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:26:46.424321   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:26:46.611052   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:26:46.631995   27348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:26:46.632245   27348 kapi.go:59] client config for ha-218762: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt", KeyFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key", CAFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0319 19:26:46.632330   27348 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.200:8443
	I0319 19:26:46.632546   27348 node_ready.go:35] waiting up to 6m0s for node "ha-218762-m03" to be "Ready" ...
	I0319 19:26:46.632625   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:46.632636   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:46.632646   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:46.632652   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:46.639133   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:26:47.133488   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:47.133507   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:47.133515   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:47.133519   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:47.138654   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:26:47.633532   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:47.633558   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:47.633570   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:47.633577   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:47.637113   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:48.133352   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:48.133375   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:48.133393   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:48.133397   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:48.137756   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:48.633037   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:48.633058   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:48.633066   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:48.633071   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:48.636541   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:48.637623   27348 node_ready.go:53] node "ha-218762-m03" has status "Ready":"False"
	I0319 19:26:49.133413   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:49.133435   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:49.133442   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:49.133445   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:49.137088   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:49.632789   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:49.632812   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:49.632822   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:49.632829   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:49.636543   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:50.133628   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:50.133653   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:50.133664   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:50.133672   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:50.137730   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:50.632852   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:50.632878   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:50.632886   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:50.632891   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:50.637236   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:50.637967   27348 node_ready.go:53] node "ha-218762-m03" has status "Ready":"False"
	I0319 19:26:51.133457   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.133476   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.133484   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.133488   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.137938   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:51.138969   27348 node_ready.go:49] node "ha-218762-m03" has status "Ready":"True"
	I0319 19:26:51.138993   27348 node_ready.go:38] duration metric: took 4.506429244s for node "ha-218762-m03" to be "Ready" ...
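
node_ready.go above polls GET /api/v1/nodes/ha-218762-m03 roughly every half second until the Ready condition turns True, which took about 4.5s here. The same check expressed with client-go looks roughly like this (kubeconfig path and node name are the ones from the log; waitNodeReady is an illustrative helper, not minikube's):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-reads the Node object until its Ready condition is True
    // or the timeout expires, the same loop the node_ready.go lines trace out.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18453-10028/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "ha-218762-m03", 6*time.Minute))
    }
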
	I0319 19:26:51.139004   27348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:26:51.139076   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:51.139091   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.139101   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.139137   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.147461   27348 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0319 19:26:51.155166   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.155255   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-6f64w
	I0319 19:26:51.155264   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.155275   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.155290   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.159040   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.159721   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:51.159732   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.159740   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.159745   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.162768   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.163370   27348 pod_ready.go:92] pod "coredns-76f75df574-6f64w" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.163396   27348 pod_ready.go:81] duration metric: took 8.210221ms for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.163409   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.163463   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zlz9l
	I0319 19:26:51.163478   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.163489   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.163498   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.166372   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:51.167572   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:51.167592   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.167602   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.167609   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.170689   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.172009   27348 pod_ready.go:92] pod "coredns-76f75df574-zlz9l" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.172028   27348 pod_ready.go:81] duration metric: took 8.611518ms for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.172039   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.172097   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762
	I0319 19:26:51.172108   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.172126   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.172133   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.177503   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:26:51.178242   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:51.178262   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.178272   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.178281   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.181251   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:51.181795   27348 pod_ready.go:92] pod "etcd-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.181812   27348 pod_ready.go:81] duration metric: took 9.765614ms for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.181824   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.181882   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:26:51.181893   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.181904   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.181914   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.185047   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.185984   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:51.186000   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.186009   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.186018   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.188764   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:51.189465   27348 pod_ready.go:92] pod "etcd-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.189486   27348 pod_ready.go:81] duration metric: took 7.65385ms for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.189497   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.333889   27348 request.go:629] Waited for 144.336477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:51.333963   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:51.333968   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.333976   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.333980   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.338092   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:51.533489   27348 request.go:629] Waited for 194.302117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.533558   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.533565   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.533575   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.533584   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.538258   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:51.734359   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:51.734384   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.734394   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.734398   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.737714   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.933950   27348 request.go:629] Waited for 195.350331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.934005   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.934012   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.934022   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.934030   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.938611   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:52.190370   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:52.190390   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.190398   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.190402   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.193935   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:52.334001   27348 request.go:629] Waited for 139.295294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:52.334055   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:52.334060   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.334069   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.334075   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.338182   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:52.690574   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:52.690597   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.690607   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.690612   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.694318   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:52.734356   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:52.734376   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.734385   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.734389   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.738601   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:53.190460   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:53.190480   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.190488   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.190492   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.193937   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.194715   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:53.194729   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.194738   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.194741   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.197532   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:53.198117   27348 pod_ready.go:92] pod "etcd-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:53.198141   27348 pod_ready.go:81] duration metric: took 2.008636s for pod "etcd-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
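
The request.go:629 "Waited ... due to client-side throttling" messages during these pod checks come from client-go's own rate limiter, not from server-side priority and fairness; the kapi.go dump earlier shows QPS:0, Burst:0 on the rest.Config, which means the client-side defaults apply. If the polling loops needed more headroom, the limits would be raised on the config before building the clientset, for example as a drop-in change to the waitNodeReady sketch above (the values 50 and 100 are arbitrary, not minikube's settings):

    // Raise client-go's client-side rate limits before kubernetes.NewForConfig(cfg).
    // With QPS and Burst left at zero the library's defaults apply, which is what
    // produces the throttling waits logged above.
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg)
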
	I0319 19:26:53.198166   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.334507   27348 request.go:629] Waited for 136.268277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762
	I0319 19:26:53.334588   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762
	I0319 19:26:53.334597   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.334604   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.334610   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.338596   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.533650   27348 request.go:629] Waited for 194.288619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:53.533713   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:53.533721   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.533737   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.533747   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.537207   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.537959   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:53.537976   27348 pod_ready.go:81] duration metric: took 339.79836ms for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.537986   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.733766   27348 request.go:629] Waited for 195.72654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m02
	I0319 19:26:53.733838   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m02
	I0319 19:26:53.733843   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.733851   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.733858   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.737663   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.934080   27348 request.go:629] Waited for 195.399867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:53.934139   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:53.934150   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.934160   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.934174   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.938076   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.939070   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:53.939090   27348 pod_ready.go:81] duration metric: took 401.09864ms for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.939100   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.134224   27348 request.go:629] Waited for 195.039747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m03
	I0319 19:26:54.134292   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m03
	I0319 19:26:54.134299   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.134309   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.134320   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.138294   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.333598   27348 request.go:629] Waited for 194.207635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:54.333660   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:54.333665   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.333673   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.333678   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.337576   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.338571   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:54.338596   27348 pod_ready.go:81] duration metric: took 399.487941ms for pod "kube-apiserver-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.338609   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.533566   27348 request.go:629] Waited for 194.895721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:26:54.533624   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:26:54.533641   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.533649   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.533653   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.537341   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.733576   27348 request.go:629] Waited for 195.281354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:54.733628   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:54.733633   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.733641   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.733644   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.737553   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.738615   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:54.738634   27348 pod_ready.go:81] duration metric: took 400.016617ms for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.738644   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.933582   27348 request.go:629] Waited for 194.869012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:26:54.933651   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:26:54.933659   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.933683   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.933706   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.937347   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:55.133988   27348 request.go:629] Waited for 195.812982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.134054   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.134076   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.134087   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.134095   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.139642   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:26:55.141192   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:55.141210   27348 pod_ready.go:81] duration metric: took 402.559898ms for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.141219   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.334384   27348 request.go:629] Waited for 193.094247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m03
	I0319 19:26:55.334433   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m03
	I0319 19:26:55.334438   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.334446   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.334450   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.338550   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:55.533943   27348 request.go:629] Waited for 194.353574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:55.534041   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:55.534052   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.534063   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.534072   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.538659   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:55.539323   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:55.539344   27348 pod_ready.go:81] duration metric: took 398.119009ms for pod "kube-controller-manager-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.539354   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.734474   27348 request.go:629] Waited for 195.058128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:26:55.734549   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:26:55.734554   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.734562   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.734567   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.738483   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:55.933973   27348 request.go:629] Waited for 194.360084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.934022   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.934028   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.934035   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.934038   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.937737   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:55.938631   27348 pod_ready.go:92] pod "kube-proxy-9q4nx" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:55.938650   27348 pod_ready.go:81] duration metric: took 399.289929ms for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.938662   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lq48k" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.133721   27348 request.go:629] Waited for 194.974778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lq48k
	I0319 19:26:56.133783   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lq48k
	I0319 19:26:56.133794   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.133805   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.133816   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.138584   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:56.333975   27348 request.go:629] Waited for 194.387303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:56.334026   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:56.334031   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.334038   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.334042   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.338142   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:56.338829   27348 pod_ready.go:92] pod "kube-proxy-lq48k" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:56.338848   27348 pod_ready.go:81] duration metric: took 400.179335ms for pod "kube-proxy-lq48k" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.338861   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.534017   27348 request.go:629] Waited for 195.058484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:26:56.534068   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:26:56.534073   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.534080   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.534087   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.538077   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:56.734353   27348 request.go:629] Waited for 195.37726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:56.734405   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:56.734411   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.734422   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.734429   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.738349   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:56.739287   27348 pod_ready.go:92] pod "kube-proxy-qd8kk" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:56.739309   27348 pod_ready.go:81] duration metric: took 400.441531ms for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.739320   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.934400   27348 request.go:629] Waited for 195.013252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:26:56.934452   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:26:56.934457   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.934464   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.934468   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.938554   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:57.133706   27348 request.go:629] Waited for 194.293257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:57.133762   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:57.133769   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.133779   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.133784   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.137298   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:57.138027   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:57.138044   27348 pod_ready.go:81] duration metric: took 398.718431ms for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.138053   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.334143   27348 request.go:629] Waited for 196.034183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:26:57.334211   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:26:57.334220   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.334227   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.334234   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.338476   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:57.534511   27348 request.go:629] Waited for 195.364987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:57.534564   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:57.534569   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.534576   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.534583   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.538686   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:57.539325   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:57.539342   27348 pod_ready.go:81] duration metric: took 401.283364ms for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.539351   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.733819   27348 request.go:629] Waited for 194.407592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m03
	I0319 19:26:57.733909   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m03
	I0319 19:26:57.733918   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.733928   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.733938   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.737717   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:57.934273   27348 request.go:629] Waited for 195.656121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:57.934338   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:57.934344   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.934352   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.934360   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.937810   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:57.938582   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:57.938603   27348 pod_ready.go:81] duration metric: took 399.245881ms for pod "kube-scheduler-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.938614   27348 pod_ready.go:38] duration metric: took 6.799598369s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
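	[note] Each pod_ready.go wait above boils down to fetching the pod, checking that its PodReady condition is True, and fetching the owning node; the duration metrics record how long each pod took. A rough client-go equivalent of that readiness test (a sketch under assumed names such as waitForPodReady, not minikube's implementation):

	    package readiness

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // isPodReady mirrors the test behind the pod_ready.go:92 lines: a pod is
	    // "Ready" when its PodReady condition reports status True.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    // waitForPodReady polls a named kube-system pod, like the "waiting up to 6m0s"
	    // entries above, re-fetching it every couple of seconds until it is Ready.
	    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // transient errors: keep polling
	                }
	                if isPodReady(pod) {
	                    fmt.Printf("pod %q is Ready\n", name)
	                    return true, nil
	                }
	                return false, nil
	            })
	    }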
	I0319 19:26:57.938628   27348 api_server.go:52] waiting for apiserver process to appear ...
	I0319 19:26:57.938674   27348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:26:57.958163   27348 api_server.go:72] duration metric: took 11.537075445s to wait for apiserver process to appear ...
	I0319 19:26:57.958185   27348 api_server.go:88] waiting for apiserver healthz status ...
	I0319 19:26:57.958205   27348 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0319 19:26:57.962762   27348 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0319 19:26:57.962812   27348 round_trippers.go:463] GET https://192.168.39.200:8443/version
	I0319 19:26:57.962817   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.962825   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.962830   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.963867   27348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0319 19:26:57.963914   27348 api_server.go:141] control plane version: v1.29.3
	I0319 19:26:57.963931   27348 api_server.go:131] duration metric: took 5.741178ms to wait for apiserver health ...
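	[note] With the pods Ready, the log switches to two lighter probes: a raw GET /healthz (whose body is the literal "ok" printed above) and a GET /version that yields the control-plane version v1.29.3. A hypothetical sketch of the same pair of calls through a clientset's discovery client:

	    package apicheck

	    import (
	        "context"
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	    )

	    // CheckAPIServer performs the two probes logged above: /healthz (expects the
	    // body "ok") and /version (reports the GitVersion, e.g. v1.29.3).
	    func CheckAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	        if err != nil {
	            return fmt.Errorf("healthz: %w", err)
	        }
	        fmt.Printf("healthz: %s\n", body)

	        v, err := cs.Discovery().ServerVersion() // issues GET /version
	        if err != nil {
	            return fmt.Errorf("version: %w", err)
	        }
	        fmt.Printf("control plane version: %s\n", v.GitVersion)
	        return nil
	    }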
	I0319 19:26:57.963938   27348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 19:26:58.134365   27348 request.go:629] Waited for 170.338863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.134448   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.134455   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.134465   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.134476   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.142269   27348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0319 19:26:58.149759   27348 system_pods.go:59] 24 kube-system pods found
	I0319 19:26:58.149787   27348 system_pods.go:61] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:26:58.149791   27348 system_pods.go:61] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:26:58.149794   27348 system_pods.go:61] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:26:58.149797   27348 system_pods.go:61] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:26:58.149800   27348 system_pods.go:61] "etcd-ha-218762-m03" [abaf6f38-4d54-46a5-bf59-a31f3e170ff8] Running
	I0319 19:26:58.149803   27348 system_pods.go:61] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:26:58.149806   27348 system_pods.go:61] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:26:58.149809   27348 system_pods.go:61] "kindnet-wv72v" [1ed042d3-e756-4c78-8708-5c5879b8488a] Running
	I0319 19:26:58.149812   27348 system_pods.go:61] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:26:58.149815   27348 system_pods.go:61] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:26:58.149818   27348 system_pods.go:61] "kube-apiserver-ha-218762-m03" [41b039c5-b777-45ea-bceb-74b2536a8a0e] Running
	I0319 19:26:58.149821   27348 system_pods.go:61] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:26:58.149825   27348 system_pods.go:61] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:26:58.149828   27348 system_pods.go:61] "kube-controller-manager-ha-218762-m03" [7a3c20f3-8688-4ff9-b1c6-bf79af946890] Running
	I0319 19:26:58.149831   27348 system_pods.go:61] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:26:58.149835   27348 system_pods.go:61] "kube-proxy-lq48k" [276cdcac-8e8b-4521-9ef0-a83138baa085] Running
	I0319 19:26:58.149838   27348 system_pods.go:61] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:26:58.149841   27348 system_pods.go:61] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:26:58.149844   27348 system_pods.go:61] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:26:58.149847   27348 system_pods.go:61] "kube-scheduler-ha-218762-m03" [ebb4beba-a1e3-40fb-bc25-44ff5c2883b8] Running
	I0319 19:26:58.149850   27348 system_pods.go:61] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:26:58.149853   27348 system_pods.go:61] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:26:58.149855   27348 system_pods.go:61] "kube-vip-ha-218762-m03" [4892ef9c-057c-4361-bb82-f64de67babb0] Running
	I0319 19:26:58.149858   27348 system_pods.go:61] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:26:58.149863   27348 system_pods.go:74] duration metric: took 185.921106ms to wait for pod list to return data ...
	I0319 19:26:58.149873   27348 default_sa.go:34] waiting for default service account to be created ...
	I0319 19:26:58.334270   27348 request.go:629] Waited for 184.336644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:26:58.334325   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:26:58.334331   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.334339   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.334344   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.338204   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:58.338318   27348 default_sa.go:45] found service account: "default"
	I0319 19:26:58.338334   27348 default_sa.go:55] duration metric: took 188.454354ms for default service account to be created ...
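	[note] The default_sa.go step simply confirms that kube-controller-manager has created the "default" ServiceAccount in the default namespace, since pods cannot be admitted without it. A small sketch of that check (DefaultSAExists is an assumed name, not minikube's code):

	    package sa

	    import (
	        "context"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // DefaultSAExists lists ServiceAccounts in the "default" namespace and looks
	    // for the "default" account, matching the "found service account" line above.
	    func DefaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, item := range sas.Items {
	            if item.Name == "default" {
	                return true, nil
	            }
	        }
	        return false, nil
	    }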
	I0319 19:26:58.338348   27348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 19:26:58.533645   27348 request.go:629] Waited for 195.200652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.533699   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.533704   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.533712   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.533715   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.541780   27348 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0319 19:26:58.548496   27348 system_pods.go:86] 24 kube-system pods found
	I0319 19:26:58.548527   27348 system_pods.go:89] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:26:58.548534   27348 system_pods.go:89] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:26:58.548541   27348 system_pods.go:89] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:26:58.548546   27348 system_pods.go:89] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:26:58.548552   27348 system_pods.go:89] "etcd-ha-218762-m03" [abaf6f38-4d54-46a5-bf59-a31f3e170ff8] Running
	I0319 19:26:58.548557   27348 system_pods.go:89] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:26:58.548564   27348 system_pods.go:89] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:26:58.548570   27348 system_pods.go:89] "kindnet-wv72v" [1ed042d3-e756-4c78-8708-5c5879b8488a] Running
	I0319 19:26:58.548575   27348 system_pods.go:89] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:26:58.548581   27348 system_pods.go:89] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:26:58.548589   27348 system_pods.go:89] "kube-apiserver-ha-218762-m03" [41b039c5-b777-45ea-bceb-74b2536a8a0e] Running
	I0319 19:26:58.548595   27348 system_pods.go:89] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:26:58.548605   27348 system_pods.go:89] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:26:58.548611   27348 system_pods.go:89] "kube-controller-manager-ha-218762-m03" [7a3c20f3-8688-4ff9-b1c6-bf79af946890] Running
	I0319 19:26:58.548620   27348 system_pods.go:89] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:26:58.548626   27348 system_pods.go:89] "kube-proxy-lq48k" [276cdcac-8e8b-4521-9ef0-a83138baa085] Running
	I0319 19:26:58.548636   27348 system_pods.go:89] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:26:58.548642   27348 system_pods.go:89] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:26:58.548649   27348 system_pods.go:89] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:26:58.548655   27348 system_pods.go:89] "kube-scheduler-ha-218762-m03" [ebb4beba-a1e3-40fb-bc25-44ff5c2883b8] Running
	I0319 19:26:58.548665   27348 system_pods.go:89] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:26:58.548670   27348 system_pods.go:89] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:26:58.548678   27348 system_pods.go:89] "kube-vip-ha-218762-m03" [4892ef9c-057c-4361-bb82-f64de67babb0] Running
	I0319 19:26:58.548683   27348 system_pods.go:89] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:26:58.548694   27348 system_pods.go:126] duration metric: took 210.337822ms to wait for k8s-apps to be running ...
	I0319 19:26:58.548749   27348 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 19:26:58.548820   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:26:58.568949   27348 system_svc.go:56] duration metric: took 20.19424ms WaitForService to wait for kubelet
	I0319 19:26:58.568974   27348 kubeadm.go:576] duration metric: took 12.147890574s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
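	[note] The system_svc check above is not an API call at all: the runner executes `sudo systemctl is-active --quiet service kubelet` inside the guest and treats exit code 0 as "running". A local stand-in for that probe (the SSH hop is omitted; this is an assumption-laden sketch, not minikube's ssh_runner):

	    package svc

	    import "os/exec"

	    // KubeletActive reports whether the kubelet systemd unit is active.
	    // `systemctl is-active --quiet` prints nothing and exits 0 only when the
	    // unit is active, so a nil error from Run() means "running".
	    func KubeletActive() bool {
	        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	    }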
	I0319 19:26:58.568993   27348 node_conditions.go:102] verifying NodePressure condition ...
	I0319 19:26:58.733856   27348 request.go:629] Waited for 164.801344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0319 19:26:58.733948   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0319 19:26:58.733955   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.733966   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.733973   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.737724   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:58.738825   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:26:58.738845   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:26:58.738854   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:26:58.738858   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:26:58.738863   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:26:58.738867   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:26:58.738873   27348 node_conditions.go:105] duration metric: took 169.875397ms to run NodePressure ...
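	[note] The NodePressure pass reads each node's status and prints the same two capacity figures three times above because the cluster has three nodes (ha-218762, -m02, -m03), each reporting 17734596Ki of ephemeral storage and 2 CPUs. A sketch that extracts those values with client-go (PrintCapacity is an assumed name):

	    package nodes

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // PrintCapacity lists all nodes and prints the capacity fields that the
	    // node_conditions.go lines above report: ephemeral storage and CPU count.
	    func PrintCapacity(ctx context.Context, cs kubernetes.Interface) error {
	        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	        }
	        return nil
	    }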
	I0319 19:26:58.738894   27348 start.go:240] waiting for startup goroutines ...
	I0319 19:26:58.738915   27348 start.go:254] writing updated cluster config ...
	I0319 19:26:58.739236   27348 ssh_runner.go:195] Run: rm -f paused
	I0319 19:26:58.793008   27348 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 19:26:58.795218   27348 out.go:177] * Done! kubectl is now configured to use "ha-218762" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.117716438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f2c8feb-8bac-4d46-a463-0ba5cb2ed9ba name=/runtime.v1.RuntimeService/Version
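	[note] Everything under "==> CRI-O <==" is crio's debug log of incoming CRI gRPC calls (Version, ImageFsInfo, ListContainers, ListPodSandbox), the same RPCs the kubelet and `crictl` issue. A hedged sketch of sending the Version call shown above over the CRI-O socket with the cri-api Go bindings (socket path and package layout are assumptions):

	    package criver

	    import (
	        "context"
	        "fmt"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    // Version sends /runtime.v1.RuntimeService/Version, the first RPC in the
	    // CRI-O debug log above, and prints the fields seen in that response
	    // (RuntimeName:cri-o, RuntimeVersion:1.29.1, RuntimeApiVersion:v1).
	    func Version(ctx context.Context) error {
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            return err
	        }
	        defer conn.Close()

	        resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	        if err != nil {
	            return err
	        }
	        fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	        return nil
	    }

	From inside the node, `sudo crictl version` and `sudo crictl ps` exercise the same endpoints.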
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.119054207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59eb6e70-9dc4-4ada-99a5-e8da511e04b3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.119508733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876630119484938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59eb6e70-9dc4-4ada-99a5-e8da511e04b3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.120045681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6bc9ada-777c-494f-b330-2650e01f700e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.120097891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6bc9ada-777c-494f-b330-2650e01f700e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.120338611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6bc9ada-777c-494f-b330-2650e01f700e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.134728471Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42c9f706-187e-49f2-a496-64e6309ac289 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.135050663Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-d8xsk,Uid:6f5b6f71-8881-4429-a25f-ca62fef2f65c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876420157479282,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:26:59.828745702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-6f64w,Uid:5b250bb2-07f0-46db-8e58-4584fbe4f882,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1710876252556783132,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:11.346902549Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-zlz9l,Uid:5fd420b7-5377-4b53-b5c3-4e785436bd9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876252542356795,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen
: 2024-03-19T19:24:11.336103952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6a496ada-aaf7-47a5-bd5d-5d909ef5df10,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876251652437545,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\
"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-19T19:24:11.345369829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&PodSandboxMetadata{Name:kindnet-d8pkw,Uid:566eb397-5ea5-4bc5-af28-3c5e9a12346b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876249553899782,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annota
tions:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:07.723707223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&PodSandboxMetadata{Name:kube-proxy-qd8kk,Uid:5c7dcc06-c11b-4173-9b5b-49aef039c7ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876249545980993,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:07.716109830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&PodSandboxMetadata{Name:etcd-ha-218762,Uid:f50238912ac80f884e60452838997ec3,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1710876228976314733,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.200:2379,kubernetes.io/config.hash: f50238912ac80f884e60452838997ec3,kubernetes.io/config.seen: 2024-03-19T19:23:48.465088987Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-218762,Uid:5f7614111d98075e40b8f2e738a2e9cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876228958283652,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5f7614111d98075e40b8f2e738a2e9cf,kubernetes.io/config.seen: 2024-03-19T19:23:48.465082560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-218762,Uid:5a8b2f8fb53080a4dfc07522f9bab3e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876228952446236,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{kubernetes.io/config.hash: 5a8b2f8fb53080a4dfc07522f9bab3e7,kubernetes.io/config.seen: 2024-03-19T19:23:48.465088214Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&P
odSandboxMetadata{Name:kube-apiserver-ha-218762,Uid:3a5b9205182474b16bf57e1daaaef85f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876228946444211,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.200:8443,kubernetes.io/config.hash: 3a5b9205182474b16bf57e1daaaef85f,kubernetes.io/config.seen: 2024-03-19T19:23:48.465090168Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-218762,Uid:5f302ea3b128447ba623d807f71536e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710876228934657484,Labels:map[string]string{component: kube-scheduler,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5f302ea3b128447ba623d807f71536e6,kubernetes.io/config.seen: 2024-03-19T19:23:48.465087026Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=42c9f706-187e-49f2-a496-64e6309ac289 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.135943965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb8b9ffb-b548-4d3a-9b5d-e1183012a959 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.136135451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb8b9ffb-b548-4d3a-9b5d-e1183012a959 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.136718002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb8b9ffb-b548-4d3a-9b5d-e1183012a959 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.170258721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff1d943c-3855-4f28-acc7-6a2071eb8949 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.170328662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff1d943c-3855-4f28-acc7-6a2071eb8949 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.171662817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb2e3e7a-9c19-4ba7-a693-c207daaad3d4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.172489367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876630172464743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb2e3e7a-9c19-4ba7-a693-c207daaad3d4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.173417355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d74c3f01-2de9-4810-af76-c35b3d251c91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.173549860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d74c3f01-2de9-4810-af76-c35b3d251c91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.173879136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d74c3f01-2de9-4810-af76-c35b3d251c91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.215268497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a5a793e-0e21-4e2b-9739-c9eb36155f07 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.215336581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a5a793e-0e21-4e2b-9739-c9eb36155f07 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.216710324Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccbbbd91-b5f8-4180-a3a6-5f5afc01793c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.217415013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876630217387334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccbbbd91-b5f8-4180-a3a6-5f5afc01793c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.218227381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e109b85b-c8d6-4765-86c6-d49a0751e47a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.218277253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e109b85b-c8d6-4765-86c6-d49a0751e47a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:30:30 ha-218762 crio[681]: time="2024-03-19 19:30:30.218841208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e109b85b-c8d6-4765-86c6-d49a0751e47a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d5224aff0311       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   03d5a8bf10dee       busybox-7fdf7869d9-d8xsk
	109c2437b7712       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   42b1b389a8129       coredns-76f75df574-6f64w
	4c1e36efc888a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   9e44b306f2e4f       coredns-76f75df574-zlz9l
	49e04c50e3c86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   fcb5bf156cf82       storage-provisioner
	ee8377d7b6d9a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   656b34459ad37       kindnet-d8pkw
	ab7b5d52d6006       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      6 minutes ago       Running             kube-proxy                0                   c02a60ba78138       kube-proxy-qd8kk
	da2851243bc4c       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   b395ee7355871       kube-vip-ha-218762
	dc37df9447020       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   59a484b792912       etcd-ha-218762
	136b31ae3d992       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Running             kube-controller-manager   0                   32f987658f099       kube-controller-manager-ha-218762
	82c2c39ac3bd9       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Running             kube-apiserver            0                   c9b47f6ddfd26       kube-apiserver-ha-218762
	b8f592d52269d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            0                   ffe45f05ed53a       kube-scheduler-ha-218762
	
	
	==> coredns [109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57] <==
	[INFO] 10.244.1.2:58529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229048s
	[INFO] 10.244.1.2:43335 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000190217s
	[INFO] 10.244.1.2:52240 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000587827s
	[INFO] 10.244.2.2:40073 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000116489s
	[INFO] 10.244.2.2:56969 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001486663s
	[INFO] 10.244.0.4:33585 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003760519s
	[INFO] 10.244.0.4:59082 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137291s
	[INFO] 10.244.0.4:40935 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118623s
	[INFO] 10.244.0.4:47943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107248s
	[INFO] 10.244.0.4:59058 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076766s
	[INFO] 10.244.1.2:50311 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001848487s
	[INFO] 10.244.1.2:43198 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174765s
	[INFO] 10.244.1.2:52346 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001415553s
	[INFO] 10.244.1.2:43441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076976s
	[INFO] 10.244.1.2:34726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138048s
	[INFO] 10.244.1.2:45607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112925s
	[INFO] 10.244.2.2:40744 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001749217s
	[INFO] 10.244.2.2:53029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111621s
	[INFO] 10.244.2.2:40938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014131s
	[INFO] 10.244.2.2:56391 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130828s
	[INFO] 10.244.1.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015755s
	[INFO] 10.244.2.2:42534 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120056s
	[INFO] 10.244.2.2:54358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316425s
	[INFO] 10.244.0.4:60417 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238089s
	[INFO] 10.244.0.4:60483 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144782s
	
	
	==> coredns [4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3] <==
	[INFO] 10.244.1.2:50371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146692s
	[INFO] 10.244.1.2:40281 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179601s
	[INFO] 10.244.2.2:51591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262048s
	[INFO] 10.244.2.2:40024 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001651832s
	[INFO] 10.244.2.2:45470 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153125s
	[INFO] 10.244.2.2:44372 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161391s
	[INFO] 10.244.0.4:55323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007536s
	[INFO] 10.244.0.4:36522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010122s
	[INFO] 10.244.0.4:59910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068387s
	[INFO] 10.244.0.4:56467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053097s
	[INFO] 10.244.1.2:47288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107648s
	[INFO] 10.244.1.2:47476 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075973s
	[INFO] 10.244.1.2:33459 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186954s
	[INFO] 10.244.2.2:42752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177891s
	[INFO] 10.244.2.2:55553 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189177s
	[INFO] 10.244.0.4:39711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067897s
	[INFO] 10.244.0.4:46192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.002995771s
	[INFO] 10.244.1.2:52462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000332016s
	[INFO] 10.244.1.2:33081 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215617s
	[INFO] 10.244.1.2:48821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092021s
	[INFO] 10.244.1.2:39937 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000452168s
	[INFO] 10.244.2.2:43887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122925s
	[INFO] 10.244.2.2:38523 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093183s
	[INFO] 10.244.2.2:56286 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149396s
	[INFO] 10.244.2.2:33782 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081737s
	
	
	==> describe nodes <==
	Name:               ha-218762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:30:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-218762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee6305e340734ffab00fb0013188dc6a
	  System UUID:                ee6305e3-4073-4ffa-b00f-b0013188dc6a
	  Boot ID:                    4a3c9f80-1526-4057-9e0e-fd3e10e41bd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d8xsk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-76f75df574-6f64w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-76f75df574-zlz9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-218762                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-d8pkw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-218762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-218762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-qd8kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-218762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-218762                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m42s (x7 over 6m42s)  kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m42s)  kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m42s)  kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s                  kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s                  kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s                  kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal  NodeReady                6m19s                  kubelet          Node ha-218762 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	
	
	Name:               ha-218762-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:25:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:28:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-218762-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21ee6ca9760341f0b88147e7d26bc5a4
	  System UUID:                21ee6ca9-7603-41f0-b881-47e7d26bc5a4
	  Boot ID:                    d29cfd35-9738-4ec3-bdfa-fd53b9a80f75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ds2kh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-218762-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m58s
	  kube-system                 kindnet-4b7jg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-ha-218762-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-218762-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-9q4nx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-ha-218762-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-vip-ha-218762-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 4m56s               kube-proxy       
	  Normal  NodeAllocatableEnforced  5m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 5m)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 5m)  kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 5m)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m58s               node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           4m42s               node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           3m31s               node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  NodeNotReady             103s                node-controller  Node ha-218762-m02 status is now: NodeNotReady
	
	
	Name:               ha-218762-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_26_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:26:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:30:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-218762-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc67d42b66264826a0e5dce81a989b48
	  System UUID:                cc67d42b-6626-4826-a0e5-dce81a989b48
	  Boot ID:                    f8b7dffa-c338-4457-9479-0c1c4ffa0bcd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-qrc54                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-218762-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m49s
	  kube-system                 kindnet-wv72v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-apiserver-ha-218762-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ha-218762-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-lq48k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-scheduler-ha-218762-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-vip-ha-218762-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node ha-218762-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	
	
	Name:               ha-218762-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_27_38_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:30:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-218762-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3252307468a44b83a5ab5199d03a0035
	  System UUID:                32523074-68a4-4b83-a5ab-5199d03a0035
	  Boot ID:                    e02289dd-a17d-490f-93ec-aa5804396da3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hslwj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-nth69    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x2 over 2m53s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x2 over 2m53s)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x2 over 2m53s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-218762-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar19 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052973] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.586107] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.313943] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.668535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.074231] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.062282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064060] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.205706] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.113821] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.284359] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.977018] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.063791] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.785726] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.566086] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.304560] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.098669] kauditd_printk_skb: 51 callbacks suppressed
	[Mar19 19:24] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 19:25] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [dc37df944702003608d704925db1515b753c461128e874e10764393af312326c] <==
	{"level":"warn","ts":"2024-03-19T19:30:30.564777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.588656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.592226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.606606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.607963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.610171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.617235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.621134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.625236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.633307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.641522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.648726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.654624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.658456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.666525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.673574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.682561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.688877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.689538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.701544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.703092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.710603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.723239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:30:30.7883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:30:30 up 7 min,  0 users,  load average: 0.33, 0.46, 0.25
	Linux ha-218762 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986] <==
	I0319 19:29:51.483318       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:30:01.493574       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:30:01.493644       1 main.go:227] handling current node
	I0319 19:30:01.493665       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:30:01.493674       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:30:01.494032       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:30:01.494050       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:30:01.494140       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:30:01.494149       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:30:11.510757       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:30:11.511002       1 main.go:227] handling current node
	I0319 19:30:11.511035       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:30:11.511089       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:30:11.511256       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:30:11.511299       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:30:11.511394       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:30:11.511404       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:30:21.526935       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:30:21.527015       1 main.go:227] handling current node
	I0319 19:30:21.527040       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:30:21.527058       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:30:21.527198       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:30:21.527217       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:30:21.527280       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:30:21.527299       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab] <==
	I0319 19:23:52.142480       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 19:23:52.143249       1 aggregator.go:165] initial CRD sync complete...
	I0319 19:23:52.143376       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 19:23:52.143403       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 19:23:52.143502       1 cache.go:39] Caches are synced for autoregister controller
	I0319 19:23:52.149116       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 19:23:52.936452       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0319 19:23:52.949762       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0319 19:23:52.949910       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 19:23:53.815468       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 19:23:53.863166       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 19:23:54.037144       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0319 19:23:54.044003       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200]
	I0319 19:23:54.045187       1 controller.go:624] quota admission added evaluator for: endpoints
	I0319 19:23:54.050161       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 19:23:54.081084       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 19:23:55.968360       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 19:23:55.986155       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0319 19:23:55.999202       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 19:24:07.687029       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0319 19:24:07.937286       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0319 19:27:40.030974       1 trace.go:236] Trace[1019634669]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0823910c-571a-4af2-9bc5-1a655210a684,client:192.168.39.161,api-group:,api-version:v1,name:kindnet-zwcq2,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-zwcq2,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:DELETE (19-Mar-2024 19:27:39.415) (total time: 615ms):
	Trace[1019634669]: ---"Object deleted from database" 543ms (19:27:40.030)
	Trace[1019634669]: [615.019116ms] [615.019116ms] END
	W0319 19:28:14.060350       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.200]
	
	
	==> kube-controller-manager [136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda] <==
	E0319 19:27:37.120296       1 certificate_controller.go:146] Sync csr-2kj9c failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2kj9c": the object has been modified; please apply your changes to the latest version and try again
	E0319 19:27:37.139424       1 certificate_controller.go:146] Sync csr-2kj9c failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2kj9c": the object has been modified; please apply your changes to the latest version and try again
	I0319 19:27:37.411270       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-218762-m04\" does not exist"
	I0319 19:27:37.459117       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nth69"
	I0319 19:27:37.470398       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l9pt2"
	I0319 19:27:37.474116       1 range_allocator.go:380] "Set node PodCIDR" node="ha-218762-m04" podCIDRs=["10.244.3.0/24"]
	I0319 19:27:37.587207       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-dnc6g"
	I0319 19:27:37.633970       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-l9pt2"
	I0319 19:27:37.707458       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-pq49n"
	I0319 19:27:37.707524       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-zwcq2"
	I0319 19:27:42.215692       1 event.go:376] "Event occurred" object="ha-218762-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller"
	I0319 19:27:42.234751       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-218762-m04"
	I0319 19:27:48.274225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-218762-m04"
	I0319 19:28:47.262273       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-218762-m04"
	I0319 19:28:47.263661       1 event.go:376] "Event occurred" object="ha-218762-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-218762-m02 status is now: NodeNotReady"
	I0319 19:28:47.290060       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.304247       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.318491       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ds2kh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.332874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.282269ms"
	I0319 19:28:47.333972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="93.39µs"
	I0319 19:28:47.340575       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-9q4nx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.366715       1 event.go:376] "Event occurred" object="kube-system/kindnet-4b7jg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.389968       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.415656       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.443169       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6] <==
	I0319 19:24:09.933910       1 server_others.go:72] "Using iptables proxy"
	I0319 19:24:09.951054       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0319 19:24:10.000172       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:24:10.000241       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:24:10.000268       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:24:10.004117       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:24:10.004313       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:24:10.004496       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:24:10.006774       1 config.go:188] "Starting service config controller"
	I0319 19:24:10.007178       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:24:10.007235       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:24:10.007254       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:24:10.009090       1 config.go:315] "Starting node config controller"
	I0319 19:24:10.009130       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:24:10.107878       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 19:24:10.107985       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:24:10.109254       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a] <==
	W0319 19:23:53.290339       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:23:53.290396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:23:53.292438       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 19:23:53.292503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 19:23:53.301423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 19:23:53.301472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:23:53.342779       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 19:23:53.342921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 19:23:53.348707       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 19:23:53.348928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 19:23:53.366723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:23:53.366845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:23:53.460916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 19:23:53.460994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 19:23:53.500052       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 19:23:53.500112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 19:23:53.570185       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:23:53.570249       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 19:23:55.700059       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0319 19:27:37.506188       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l9pt2\": pod kindnet-l9pt2 is already assigned to node \"ha-218762-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l9pt2" node="ha-218762-m04"
	E0319 19:27:37.506418       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l9pt2\": pod kindnet-l9pt2 is already assigned to node \"ha-218762-m04\"" pod="kube-system/kindnet-l9pt2"
	E0319 19:27:37.546108       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dnc6g\": pod kube-proxy-dnc6g is already assigned to node \"ha-218762-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dnc6g" node="ha-218762-m04"
	E0319 19:27:37.546222       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 9aab85ee-ad94-4703-864a-11c1720eb35f(kube-system/kube-proxy-dnc6g) wasn't assumed so cannot be forgotten"
	E0319 19:27:37.546306       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dnc6g\": pod kube-proxy-dnc6g is already assigned to node \"ha-218762-m04\"" pod="kube-system/kube-proxy-dnc6g"
	I0319 19:27:37.546391       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dnc6g" node="ha-218762-m04"
	
	
	==> kubelet <==
	Mar 19 19:25:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:25:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:25:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:26:56 ha-218762 kubelet[1386]: E0319 19:26:56.169360    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:26:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:26:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:26:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:26:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:26:59 ha-218762 kubelet[1386]: I0319 19:26:59.828973    1386 topology_manager.go:215] "Topology Admit Handler" podUID="6f5b6f71-8881-4429-a25f-ca62fef2f65c" podNamespace="default" podName="busybox-7fdf7869d9-d8xsk"
	Mar 19 19:26:59 ha-218762 kubelet[1386]: I0319 19:26:59.885065    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689s2\" (UniqueName: \"kubernetes.io/projected/6f5b6f71-8881-4429-a25f-ca62fef2f65c-kube-api-access-689s2\") pod \"busybox-7fdf7869d9-d8xsk\" (UID: \"6f5b6f71-8881-4429-a25f-ca62fef2f65c\") " pod="default/busybox-7fdf7869d9-d8xsk"
	Mar 19 19:27:56 ha-218762 kubelet[1386]: E0319 19:27:56.166920    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:27:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:27:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:27:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:27:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:28:56 ha-218762 kubelet[1386]: E0319 19:28:56.171252    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:28:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:28:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:28:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:28:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:29:56 ha-218762 kubelet[1386]: E0319 19:29:56.168413    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:29:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:29:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:29:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:29:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-218762 -n ha-218762
helpers_test.go:261: (dbg) Run:  kubectl --context ha-218762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (3.193792506s)

                                                
                                                
-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:30:35.352032   31768 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:30:35.352156   31768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:35.352165   31768 out.go:304] Setting ErrFile to fd 2...
	I0319 19:30:35.352172   31768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:35.352386   31768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:30:35.352550   31768 out.go:298] Setting JSON to false
	I0319 19:30:35.352575   31768 mustload.go:65] Loading cluster: ha-218762
	I0319 19:30:35.352622   31768 notify.go:220] Checking for updates...
	I0319 19:30:35.353082   31768 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:30:35.353102   31768 status.go:255] checking status of ha-218762 ...
	I0319 19:30:35.353539   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:35.353595   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:35.370428   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0319 19:30:35.370861   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:35.371458   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:35.371486   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:35.371907   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:35.372148   31768 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:30:35.373856   31768 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:30:35.373881   31768 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:35.374169   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:35.374211   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:35.389011   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I0319 19:30:35.389492   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:35.389971   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:35.389992   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:35.390265   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:35.390460   31768 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:30:35.393152   31768 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:35.393597   31768 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:35.393629   31768 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:35.393789   31768 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:35.394105   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:35.394159   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:35.408911   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0319 19:30:35.409379   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:35.409868   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:35.409896   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:35.410209   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:35.410413   31768 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:30:35.410601   31768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:35.410647   31768 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:30:35.413680   31768 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:35.414169   31768 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:35.414201   31768 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:35.414356   31768 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:30:35.414493   31768 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:30:35.414602   31768 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:30:35.414721   31768 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:30:35.499124   31768 ssh_runner.go:195] Run: systemctl --version
	I0319 19:30:35.507750   31768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:35.526097   31768 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:35.526127   31768 api_server.go:166] Checking apiserver status ...
	I0319 19:30:35.526164   31768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:35.544253   31768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:30:35.555193   31768 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:35.555236   31768 ssh_runner.go:195] Run: ls
	I0319 19:30:35.562685   31768 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:35.567112   31768 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:35.567132   31768 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:30:35.567150   31768 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:35.567167   31768 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:30:35.567519   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:35.567552   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:35.582217   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0319 19:30:35.582589   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:35.583176   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:35.583201   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:35.583504   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:35.583685   31768 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:30:35.585438   31768 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:30:35.585458   31768 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:35.585767   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:35.585806   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:35.600859   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43205
	I0319 19:30:35.601274   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:35.601752   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:35.601790   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:35.602082   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:35.602293   31768 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:30:35.605058   31768 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:35.605480   31768 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:35.605510   31768 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:35.605643   31768 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:35.606010   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:35.606048   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:35.620449   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0319 19:30:35.620784   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:35.621257   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:35.621278   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:35.621559   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:35.621757   31768 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:30:35.621934   31768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:35.621974   31768 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:30:35.624490   31768 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:35.624870   31768 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:35.624900   31768 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:35.625038   31768 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:30:35.625194   31768 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:30:35.625312   31768 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:30:35.625437   31768 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	W0319 19:30:38.132537   31768 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:38.132654   31768 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	E0319 19:30:38.132673   31768 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:38.132680   31768 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:30:38.132697   31768 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:38.132704   31768 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:30:38.133050   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:38.133107   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:38.148324   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0319 19:30:38.148771   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:38.149200   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:38.149229   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:38.149558   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:38.149729   31768 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:30:38.151099   31768 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:30:38.151116   31768 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:38.151395   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:38.151442   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:38.165498   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I0319 19:30:38.165873   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:38.166373   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:38.166399   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:38.166777   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:38.166957   31768 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:30:38.169560   31768 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:38.169948   31768 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:38.169972   31768 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:38.170129   31768 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:38.170407   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:38.170440   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:38.185057   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0319 19:30:38.185426   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:38.185794   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:38.185814   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:38.186069   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:38.186230   31768 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:30:38.186387   31768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:38.186402   31768 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:30:38.189113   31768 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:38.189520   31768 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:38.189549   31768 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:38.189641   31768 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:30:38.189797   31768 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:30:38.189944   31768 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:30:38.190105   31768 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:30:38.274208   31768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:38.291647   31768 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:38.291677   31768 api_server.go:166] Checking apiserver status ...
	I0319 19:30:38.291715   31768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:38.312617   31768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:30:38.326718   31768 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:38.326757   31768 ssh_runner.go:195] Run: ls
	I0319 19:30:38.332081   31768 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:38.338212   31768 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:38.338231   31768 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:30:38.338239   31768 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:38.338253   31768 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:30:38.338558   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:38.338595   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:38.353246   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0319 19:30:38.353636   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:38.354033   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:38.354052   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:38.354348   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:38.354512   31768 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:30:38.355886   31768 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:30:38.355901   31768 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:38.356167   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:38.356226   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:38.369916   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42871
	I0319 19:30:38.370252   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:38.370660   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:38.370679   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:38.370997   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:38.371167   31768 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:30:38.373988   31768 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:38.374345   31768 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:38.374388   31768 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:38.374488   31768 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:38.374744   31768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:38.374787   31768 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:38.388920   31768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0319 19:30:38.389299   31768 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:38.389792   31768 main.go:141] libmachine: Using API Version  1
	I0319 19:30:38.389812   31768 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:38.390155   31768 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:38.390305   31768 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:30:38.390503   31768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:38.390535   31768 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:30:38.393095   31768 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:38.393449   31768 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:38.393469   31768 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:38.393643   31768 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:30:38.393803   31768 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:30:38.393970   31768 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:30:38.394101   31768 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:30:38.476338   31768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:38.492408   31768 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (4.949193592s)

-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0319 19:30:39.747076   31863 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:30:39.747187   31863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:39.747197   31863 out.go:304] Setting ErrFile to fd 2...
	I0319 19:30:39.747203   31863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:39.747416   31863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:30:39.747586   31863 out.go:298] Setting JSON to false
	I0319 19:30:39.747613   31863 mustload.go:65] Loading cluster: ha-218762
	I0319 19:30:39.747722   31863 notify.go:220] Checking for updates...
	I0319 19:30:39.748070   31863 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:30:39.748086   31863 status.go:255] checking status of ha-218762 ...
	I0319 19:30:39.748593   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:39.748671   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:39.766532   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0319 19:30:39.766913   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:39.767420   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:39.767459   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:39.767862   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:39.768024   31863 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:30:39.769594   31863 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:30:39.769614   31863 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:39.769959   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:39.769993   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:39.784574   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37569
	I0319 19:30:39.784933   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:39.785317   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:39.785344   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:39.785645   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:39.785801   31863 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:30:39.788237   31863 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:39.788683   31863 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:39.788713   31863 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:39.788824   31863 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:39.789124   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:39.789162   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:39.803476   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0319 19:30:39.803868   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:39.804343   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:39.804378   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:39.804716   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:39.804908   31863 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:30:39.805098   31863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:39.805128   31863 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:30:39.807667   31863 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:39.808050   31863 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:39.808075   31863 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:39.808216   31863 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:30:39.808407   31863 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:30:39.808599   31863 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:30:39.808733   31863 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:30:39.893029   31863 ssh_runner.go:195] Run: systemctl --version
	I0319 19:30:39.899434   31863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:39.916528   31863 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:39.916552   31863 api_server.go:166] Checking apiserver status ...
	I0319 19:30:39.916582   31863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:39.933165   31863 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:30:39.945622   31863 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:39.945663   31863 ssh_runner.go:195] Run: ls
	I0319 19:30:39.950715   31863 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:39.955052   31863 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:39.955069   31863 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:30:39.955078   31863 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:39.955100   31863 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:30:39.955365   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:39.955394   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:39.970298   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0319 19:30:39.970639   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:39.971056   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:39.971085   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:39.971371   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:39.971561   31863 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:30:39.973003   31863 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:30:39.973018   31863 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:39.973294   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:39.973324   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:39.986957   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0319 19:30:39.987280   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:39.987641   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:39.987659   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:39.987950   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:39.988127   31863 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:30:39.990466   31863 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:39.990912   31863 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:39.990941   31863 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:39.991093   31863 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:39.991385   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:39.991415   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:40.006956   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0319 19:30:40.007302   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:40.007696   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:40.007718   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:40.008002   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:40.008155   31863 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:30:40.008335   31863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:40.008359   31863 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:30:40.010743   31863 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:40.011163   31863 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:40.011187   31863 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:40.011290   31863 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:30:40.011434   31863 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:30:40.011569   31863 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:30:40.011699   31863 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	W0319 19:30:41.208531   31863 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:41.208576   31863 retry.go:31] will retry after 286.849442ms: dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:44.276530   31863 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:44.276629   31863 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	E0319 19:30:44.276656   31863 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:44.276663   31863 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:30:44.276684   31863 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:44.276692   31863 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:30:44.276978   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:44.277022   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:44.291415   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I0319 19:30:44.291865   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:44.292357   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:44.292380   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:44.292681   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:44.292857   31863 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:30:44.294334   31863 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:30:44.294349   31863 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:44.294647   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:44.294688   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:44.308580   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
	I0319 19:30:44.308999   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:44.309515   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:44.309539   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:44.309841   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:44.310019   31863 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:30:44.313021   31863 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:44.313423   31863 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:44.313449   31863 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:44.313625   31863 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:44.313934   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:44.313977   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:44.327893   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I0319 19:30:44.328279   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:44.328689   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:44.328717   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:44.329007   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:44.329196   31863 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:30:44.329357   31863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:44.329383   31863 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:30:44.332087   31863 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:44.332523   31863 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:44.332555   31863 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:44.332660   31863 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:30:44.332814   31863 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:30:44.332954   31863 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:30:44.333145   31863 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:30:44.421030   31863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:44.437824   31863 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:44.437851   31863 api_server.go:166] Checking apiserver status ...
	I0319 19:30:44.437891   31863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:44.453609   31863 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:30:44.465203   31863 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:44.465244   31863 ssh_runner.go:195] Run: ls
	I0319 19:30:44.470541   31863 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:44.477936   31863 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:44.477956   31863 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:30:44.477964   31863 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:44.477978   31863 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:30:44.478257   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:44.478290   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:44.494331   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0319 19:30:44.494808   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:44.495317   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:44.495337   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:44.495638   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:44.495848   31863 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:30:44.497614   31863 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:30:44.497632   31863 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:44.497911   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:44.497945   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:44.512606   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0319 19:30:44.513032   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:44.513485   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:44.513509   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:44.513853   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:44.514035   31863 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:30:44.516849   31863 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:44.517284   31863 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:44.517316   31863 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:44.517447   31863 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:44.517708   31863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:44.517744   31863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:44.533419   31863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0319 19:30:44.533833   31863 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:44.534376   31863 main.go:141] libmachine: Using API Version  1
	I0319 19:30:44.534400   31863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:44.534783   31863 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:44.534989   31863 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:30:44.535187   31863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:44.535209   31863 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:30:44.537764   31863 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:44.538232   31863 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:44.538268   31863 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:44.538412   31863 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:30:44.538565   31863 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:30:44.538713   31863 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:30:44.538874   31863 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:30:44.625120   31863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:44.642433   31863 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (5.09783097s)

-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0319 19:30:46.091572   31959 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:30:46.091706   31959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:46.091716   31959 out.go:304] Setting ErrFile to fd 2...
	I0319 19:30:46.091721   31959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:46.091913   31959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:30:46.092073   31959 out.go:298] Setting JSON to false
	I0319 19:30:46.092098   31959 mustload.go:65] Loading cluster: ha-218762
	I0319 19:30:46.092202   31959 notify.go:220] Checking for updates...
	I0319 19:30:46.092492   31959 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:30:46.092509   31959 status.go:255] checking status of ha-218762 ...
	I0319 19:30:46.092998   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:46.093053   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:46.107448   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0319 19:30:46.107862   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:46.108491   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:46.108518   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:46.108933   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:46.109132   31959 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:30:46.110934   31959 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:30:46.110948   31959 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:46.111303   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:46.111343   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:46.126475   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0319 19:30:46.126801   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:46.127199   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:46.127230   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:46.127526   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:46.127716   31959 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:30:46.130145   31959 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:46.130505   31959 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:46.130544   31959 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:46.130630   31959 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:46.130908   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:46.130940   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:46.145535   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0319 19:30:46.145870   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:46.146313   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:46.146335   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:46.146680   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:46.146864   31959 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:30:46.147036   31959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:46.147060   31959 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:30:46.149617   31959 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:46.150015   31959 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:46.150040   31959 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:46.150163   31959 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:30:46.150328   31959 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:30:46.150487   31959 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:30:46.150616   31959 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:30:46.236553   31959 ssh_runner.go:195] Run: systemctl --version
	I0319 19:30:46.243673   31959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:46.261353   31959 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:46.261401   31959 api_server.go:166] Checking apiserver status ...
	I0319 19:30:46.261495   31959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:46.280538   31959 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:30:46.292917   31959 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:46.292985   31959 ssh_runner.go:195] Run: ls
	I0319 19:30:46.298234   31959 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:46.302648   31959 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:46.302668   31959 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:30:46.302679   31959 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:46.302697   31959 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:30:46.303060   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:46.303099   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:46.317973   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I0319 19:30:46.318307   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:46.318754   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:46.318774   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:46.319148   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:46.319322   31959 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:30:46.320891   31959 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:30:46.320906   31959 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:46.321169   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:46.321197   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:46.336106   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I0319 19:30:46.336530   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:46.336948   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:46.336968   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:46.337297   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:46.337459   31959 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:30:46.340254   31959 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:46.340691   31959 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:46.340726   31959 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:46.340881   31959 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:46.341188   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:46.341227   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:46.355216   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I0319 19:30:46.355596   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:46.355991   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:46.356008   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:46.356339   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:46.356506   31959 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:30:46.356674   31959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:46.356694   31959 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:30:46.359309   31959 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:46.359724   31959 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:46.359749   31959 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:46.359897   31959 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:30:46.360053   31959 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:30:46.360215   31959 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:30:46.360389   31959 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	W0319 19:30:47.348466   31959 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:47.348529   31959 retry.go:31] will retry after 365.518102ms: dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:50.772572   31959 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:50.772654   31959 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	E0319 19:30:50.772694   31959 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:50.772706   31959 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:30:50.772734   31959 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:50.772749   31959 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:30:50.773045   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:50.773101   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:50.787979   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0319 19:30:50.788418   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:50.788846   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:50.788865   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:50.789201   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:50.789385   31959 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:30:50.790925   31959 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:30:50.790939   31959 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:50.791227   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:50.791267   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:50.805690   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38779
	I0319 19:30:50.806094   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:50.806585   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:50.806618   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:50.806960   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:50.807153   31959 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:30:50.809674   31959 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:50.810100   31959 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:50.810126   31959 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:50.810288   31959 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:50.810665   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:50.810709   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:50.824834   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0319 19:30:50.825277   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:50.825750   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:50.825782   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:50.826139   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:50.826321   31959 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:30:50.826514   31959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:50.826535   31959 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:30:50.829111   31959 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:50.829537   31959 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:50.829565   31959 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:50.829725   31959 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:30:50.829870   31959 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:30:50.830017   31959 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:30:50.830180   31959 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:30:50.915622   31959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:50.936921   31959 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:50.936950   31959 api_server.go:166] Checking apiserver status ...
	I0319 19:30:50.936984   31959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:50.951922   31959 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:30:50.962339   31959 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:50.962373   31959 ssh_runner.go:195] Run: ls
	I0319 19:30:50.967999   31959 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:50.972492   31959 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:50.972510   31959 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:30:50.972517   31959 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:50.972531   31959 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:30:50.972839   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:50.972877   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:50.987201   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0319 19:30:50.987602   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:50.988023   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:50.988045   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:50.988374   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:50.988534   31959 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:30:50.990139   31959 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:30:50.990153   31959 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:50.990500   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:50.990543   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:51.005370   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0319 19:30:51.005775   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:51.006207   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:51.006230   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:51.006597   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:51.006791   31959 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:30:51.009641   31959 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:51.010042   31959 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:51.010076   31959 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:51.010221   31959 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:51.010619   31959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:51.010671   31959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:51.027089   31959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0319 19:30:51.027515   31959 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:51.028016   31959 main.go:141] libmachine: Using API Version  1
	I0319 19:30:51.028036   31959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:51.028324   31959 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:51.028504   31959 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:30:51.028683   31959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:51.028702   31959 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:30:51.031267   31959 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:51.031685   31959 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:51.031714   31959 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:51.031863   31959 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:30:51.032047   31959 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:30:51.032238   31959 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:30:51.032380   31959 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:30:51.117183   31959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:51.131880   31959 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
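For context on the repeated `sh -c "df -h /var | awk 'NR==2{print $5}'"` commands in the dump above: the status check probes each node's /var usage over SSH, and when the dial to 192.168.39.234:22 fails the node is reported as Host:Error / Kubelet:Nonexistent. The "unable to find freezer cgroup" warnings are likely benign on cgroup v2 hosts, where /proc/PID/cgroup has no per-controller freezer line, so the egrep exits 1 and the check falls back to the apiserver healthz probe. Below is a minimal illustrative sketch of such a /var probe in Go; the runOverSSH helper and its behavior are assumptions for illustration, not minikube's actual implementation.

```go
// Illustrative sketch only: probes /var usage on a node the way the logged
// df/awk command does. runOverSSH is a hypothetical stand-in for an SSH
// command runner such as minikube's ssh_runner.
package main

import (
	"fmt"
	"strings"
)

// runOverSSH is assumed for this sketch: dial host:22, run cmd, return output.
func runOverSSH(host, cmd string) (string, error) {
	return "23%", nil // placeholder result
}

// varUsage returns the used percentage of /var on the given host.
func varUsage(host string) (string, error) {
	out, err := runOverSSH(host, `sh -c "df -h /var | awk 'NR==2{print $5}'"`)
	if err != nil {
		// In the failing runs above, this is where "no route to host" surfaces
		// and the node ends up reported as Host:Error.
		return "", fmt.Errorf("failed to get storage capacity of /var: %w", err)
	}
	return strings.TrimSpace(out), nil
}

func main() {
	usage, err := varUsage("192.168.39.234")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("/var usage:", usage)
}
```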
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (3.761767045s)

                                                
                                                
-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:30:54.442035   32066 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:30:54.442286   32066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:54.442295   32066 out.go:304] Setting ErrFile to fd 2...
	I0319 19:30:54.442300   32066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:30:54.442511   32066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:30:54.442666   32066 out.go:298] Setting JSON to false
	I0319 19:30:54.442689   32066 mustload.go:65] Loading cluster: ha-218762
	I0319 19:30:54.442813   32066 notify.go:220] Checking for updates...
	I0319 19:30:54.443215   32066 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:30:54.443236   32066 status.go:255] checking status of ha-218762 ...
	I0319 19:30:54.443705   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:54.443783   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:54.459320   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0319 19:30:54.459686   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:54.460294   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:54.460320   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:54.460765   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:54.460963   32066 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:30:54.462625   32066 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:30:54.462639   32066 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:54.462913   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:54.462950   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:54.478431   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0319 19:30:54.478786   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:54.479189   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:54.479211   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:54.479520   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:54.479673   32066 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:30:54.481943   32066 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:54.482385   32066 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:54.482415   32066 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:54.482534   32066 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:30:54.482912   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:54.482953   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:54.496814   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0319 19:30:54.497199   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:54.497598   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:54.497622   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:54.497915   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:54.498114   32066 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:30:54.498292   32066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:54.498318   32066 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:30:54.500709   32066 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:54.501119   32066 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:30:54.501133   32066 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:30:54.501250   32066 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:30:54.501417   32066 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:30:54.501564   32066 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:30:54.501728   32066 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:30:54.593237   32066 ssh_runner.go:195] Run: systemctl --version
	I0319 19:30:54.599639   32066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:54.615313   32066 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:54.615339   32066 api_server.go:166] Checking apiserver status ...
	I0319 19:30:54.615372   32066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:54.631461   32066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:30:54.651213   32066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:54.651290   32066 ssh_runner.go:195] Run: ls
	I0319 19:30:54.656899   32066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:54.667679   32066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:54.667698   32066 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:30:54.667708   32066 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:54.667723   32066 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:30:54.668001   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:54.668045   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:54.682403   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0319 19:30:54.682857   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:54.683373   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:54.683398   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:54.683678   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:54.683870   32066 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:30:54.685529   32066 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:30:54.685544   32066 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:54.685832   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:54.685881   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:54.700291   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0319 19:30:54.700655   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:54.701102   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:54.701150   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:54.701432   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:54.701629   32066 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:30:54.704518   32066 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:54.704965   32066 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:54.704992   32066 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:54.705138   32066 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:30:54.705431   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:54.705467   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:54.719991   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0319 19:30:54.720437   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:54.720853   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:54.720877   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:54.721186   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:54.721379   32066 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:30:54.721560   32066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:54.721578   32066 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:30:54.724215   32066 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:54.724714   32066 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:30:54.724750   32066 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:30:54.724878   32066 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:30:54.725041   32066 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:30:54.725182   32066 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:30:54.725330   32066 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	W0319 19:30:57.780472   32066 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:30:57.780565   32066 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	E0319 19:30:57.780584   32066 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:57.780601   32066 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:30:57.780626   32066 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:30:57.780642   32066 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:30:57.780971   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:57.781019   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:57.795298   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0319 19:30:57.795684   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:57.796171   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:57.796193   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:57.796546   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:57.796728   32066 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:30:57.798321   32066 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:30:57.798336   32066 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:57.798656   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:57.798697   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:57.814353   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I0319 19:30:57.814736   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:57.815151   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:57.815173   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:57.815479   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:57.815659   32066 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:30:57.818354   32066 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:57.818793   32066 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:57.818814   32066 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:57.818955   32066 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:30:57.819242   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:57.819274   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:57.833382   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0319 19:30:57.833856   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:57.834282   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:57.834306   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:57.834630   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:57.834793   32066 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:30:57.834989   32066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:57.835008   32066 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:30:57.837755   32066 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:57.838167   32066 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:30:57.838200   32066 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:30:57.838336   32066 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:30:57.838508   32066 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:30:57.838660   32066 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:30:57.838794   32066 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:30:57.925537   32066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:57.945736   32066 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:30:57.945764   32066 api_server.go:166] Checking apiserver status ...
	I0319 19:30:57.945801   32066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:30:57.961931   32066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:30:57.973566   32066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:30:57.973622   32066 ssh_runner.go:195] Run: ls
	I0319 19:30:57.978727   32066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:30:57.985335   32066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:30:57.985354   32066 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:30:57.985361   32066 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:30:57.985375   32066 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:30:57.985640   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:57.985677   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:58.001217   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0319 19:30:58.001658   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:58.002074   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:58.002092   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:58.002450   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:58.002625   32066 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:30:58.004079   32066 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:30:58.004095   32066 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:58.004434   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:58.004482   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:58.019566   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0319 19:30:58.019941   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:58.020485   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:58.020514   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:58.020808   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:58.020981   32066 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:30:58.023444   32066 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:58.023801   32066 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:58.023828   32066 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:58.023989   32066 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:30:58.024253   32066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:30:58.024308   32066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:30:58.037937   32066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0319 19:30:58.038325   32066 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:30:58.038767   32066 main.go:141] libmachine: Using API Version  1
	I0319 19:30:58.038788   32066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:30:58.039060   32066 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:30:58.039218   32066 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:30:58.039382   32066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:30:58.039404   32066 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:30:58.041919   32066 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:58.042327   32066 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:30:58.042351   32066 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:30:58.042473   32066 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:30:58.042615   32066 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:30:58.042767   32066 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:30:58.042909   32066 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:30:58.128911   32066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:30:58.143744   32066 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
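The "dial failure (will retry)" lines above show the SSH client retrying the TCP connection to ha-218762-m02 before giving up and marking the node as Error. The sketch below illustrates that retry pattern in Go; the attempt count, timeout, and backoff are assumptions for illustration, not minikube's actual values.

```go
// Illustrative sketch only: retry a TCP dial a few times, logging each
// failure, before surfacing the final error (here, "no route to host").
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection several times before giving up.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if _, err := dialWithRetry("192.168.39.234:22", 3, 500*time.Millisecond); err != nil {
		// This is the point at which the status command reports
		// Host:Error / Kubelet:Nonexistent for the unreachable node.
		fmt.Println("status error:", err)
	}
}
```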
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (3.745462472s)

                                                
                                                
-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:31:02.894040   32161 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:31:02.894162   32161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:02.894173   32161 out.go:304] Setting ErrFile to fd 2...
	I0319 19:31:02.894180   32161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:02.894386   32161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:31:02.894560   32161 out.go:298] Setting JSON to false
	I0319 19:31:02.894594   32161 mustload.go:65] Loading cluster: ha-218762
	I0319 19:31:02.894726   32161 notify.go:220] Checking for updates...
	I0319 19:31:02.895088   32161 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:31:02.895108   32161 status.go:255] checking status of ha-218762 ...
	I0319 19:31:02.895499   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:02.895570   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:02.912037   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0319 19:31:02.912423   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:02.912956   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:02.912979   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:02.913445   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:02.913652   32161 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:31:02.915431   32161 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:31:02.915448   32161 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:02.915726   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:02.915764   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:02.930617   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34365
	I0319 19:31:02.930953   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:02.931379   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:02.931395   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:02.931681   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:02.931909   32161 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:31:02.934462   32161 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:02.934848   32161 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:02.934894   32161 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:02.935024   32161 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:02.935403   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:02.935444   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:02.949293   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0319 19:31:02.949785   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:02.950222   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:02.950246   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:02.950514   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:02.950710   32161 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:31:02.950883   32161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:02.950910   32161 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:31:02.953816   32161 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:02.954205   32161 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:02.954228   32161 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:02.954443   32161 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:31:02.954619   32161 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:31:02.954776   32161 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:31:02.954908   32161 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:31:03.041316   32161 ssh_runner.go:195] Run: systemctl --version
	I0319 19:31:03.048531   32161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:03.065899   32161 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:03.065923   32161 api_server.go:166] Checking apiserver status ...
	I0319 19:31:03.065967   32161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:03.084599   32161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:31:03.095492   32161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:03.095539   32161 ssh_runner.go:195] Run: ls
	I0319 19:31:03.100853   32161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:03.109178   32161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:03.109200   32161 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:31:03.109212   32161 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:03.109242   32161 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:31:03.109575   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:03.109625   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:03.124389   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34861
	I0319 19:31:03.124850   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:03.125303   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:03.125324   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:03.125596   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:03.125769   32161 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:31:03.127178   32161 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:31:03.127202   32161 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:31:03.127493   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:03.127523   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:03.141951   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I0319 19:31:03.142403   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:03.142859   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:03.142881   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:03.143168   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:03.143342   32161 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:31:03.145768   32161 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:31:03.146115   32161 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:31:03.146140   32161 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:31:03.146276   32161 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:31:03.146554   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:03.146584   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:03.160349   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0319 19:31:03.160740   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:03.161163   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:03.161181   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:03.161462   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:03.161641   32161 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:31:03.161843   32161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:03.161864   32161 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:31:03.164752   32161 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:31:03.165138   32161 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:31:03.165157   32161 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:31:03.165373   32161 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:31:03.165541   32161 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:31:03.165695   32161 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:31:03.165841   32161 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	W0319 19:31:06.228527   32161 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.234:22: connect: no route to host
	W0319 19:31:06.228618   32161 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	E0319 19:31:06.228642   32161 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:31:06.228656   32161 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:31:06.228681   32161 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	I0319 19:31:06.228693   32161 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:31:06.229103   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:06.229160   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:06.245180   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
	I0319 19:31:06.245584   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:06.246041   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:06.246061   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:06.246411   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:06.246608   32161 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:31:06.248377   32161 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:31:06.248396   32161 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:06.248674   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:06.248712   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:06.263805   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0319 19:31:06.264116   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:06.264575   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:06.264590   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:06.264836   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:06.265017   32161 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:31:06.267657   32161 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:06.268120   32161 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:06.268148   32161 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:06.268291   32161 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:06.268669   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:06.268712   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:06.282807   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0319 19:31:06.283211   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:06.283642   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:06.283665   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:06.284000   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:06.284191   32161 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:31:06.284390   32161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:06.284407   32161 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:31:06.286939   32161 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:06.287297   32161 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:06.287317   32161 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:06.287484   32161 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:31:06.287641   32161 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:31:06.287777   32161 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:31:06.287893   32161 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:31:06.368636   32161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:06.385353   32161 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:06.385383   32161 api_server.go:166] Checking apiserver status ...
	I0319 19:31:06.385427   32161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:06.401845   32161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:31:06.412147   32161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:06.412184   32161 ssh_runner.go:195] Run: ls
	I0319 19:31:06.417973   32161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:06.424798   32161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:06.424821   32161 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:31:06.424831   32161 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:06.424851   32161 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:31:06.425137   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:06.425179   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:06.440510   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0319 19:31:06.440910   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:06.441422   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:06.441447   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:06.441793   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:06.441992   32161 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:31:06.443794   32161 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:31:06.443810   32161 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:06.444195   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:06.444234   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:06.458764   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0319 19:31:06.459206   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:06.459680   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:06.459700   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:06.460004   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:06.460187   32161 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:31:06.462652   32161 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:06.463084   32161 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:06.463116   32161 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:06.463251   32161 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:06.463525   32161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:06.463555   32161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:06.478969   32161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0319 19:31:06.479445   32161 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:06.479914   32161 main.go:141] libmachine: Using API Version  1
	I0319 19:31:06.479934   32161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:06.480254   32161 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:06.480426   32161 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:31:06.480617   32161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:06.480637   32161 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:31:06.483212   32161 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:06.483637   32161 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:06.483659   32161 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:06.483821   32161 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:31:06.483992   32161 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:31:06.484124   32161 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:31:06.484250   32161 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:31:06.568620   32161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:06.584229   32161 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 7 (664.979168ms)

-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Stopping
	kubelet: Stopping
	apiserver: Stopping
	kubeconfig: Stopping
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0319 19:31:11.075857   32267 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:31:11.075959   32267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:11.075970   32267 out.go:304] Setting ErrFile to fd 2...
	I0319 19:31:11.075974   32267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:11.076155   32267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:31:11.076317   32267 out.go:298] Setting JSON to false
	I0319 19:31:11.076341   32267 mustload.go:65] Loading cluster: ha-218762
	I0319 19:31:11.076473   32267 notify.go:220] Checking for updates...
	I0319 19:31:11.076695   32267 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:31:11.076708   32267 status.go:255] checking status of ha-218762 ...
	I0319 19:31:11.077081   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.077139   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.091711   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
	I0319 19:31:11.092066   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.092745   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.092790   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.093122   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.093357   32267 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:31:11.094885   32267 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:31:11.094902   32267 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:11.095170   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.095205   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.109358   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0319 19:31:11.109680   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.110163   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.110183   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.110500   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.110675   32267 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:31:11.113307   32267 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:11.113738   32267 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:11.113771   32267 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:11.114002   32267 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:11.114367   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.114404   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.128323   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I0319 19:31:11.128742   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.129208   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.129228   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.129574   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.129759   32267 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:31:11.129979   32267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:11.130002   32267 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:31:11.132825   32267 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:11.133240   32267 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:11.133263   32267 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:11.133398   32267 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:31:11.133580   32267 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:31:11.133736   32267 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:31:11.133910   32267 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:31:11.222489   32267 ssh_runner.go:195] Run: systemctl --version
	I0319 19:31:11.230638   32267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:11.246662   32267 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:11.246686   32267 api_server.go:166] Checking apiserver status ...
	I0319 19:31:11.246718   32267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:11.263703   32267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:31:11.275730   32267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:11.275781   32267 ssh_runner.go:195] Run: ls
	I0319 19:31:11.281392   32267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:11.285638   32267 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:11.285656   32267 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:31:11.285665   32267 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:11.285679   32267 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:31:11.285951   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.285983   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.300606   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0319 19:31:11.301047   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.301552   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.301576   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.301893   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.302063   32267 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:31:11.303953   32267 status.go:330] ha-218762-m02 host status = "Stopping" (err=<nil>)
	I0319 19:31:11.303970   32267 status.go:343] host is not running, skipping remaining checks
	I0319 19:31:11.303977   32267 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Stopping Kubelet:Stopping APIServer:Stopping Kubeconfig:Stopping Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:11.303996   32267 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:31:11.304362   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.304403   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.319659   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
	I0319 19:31:11.320005   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.320447   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.320465   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.320777   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.320973   32267 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:31:11.322448   32267 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:31:11.322464   32267 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:11.322734   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.322771   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.336307   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32919
	I0319 19:31:11.336635   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.337176   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.337197   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.337702   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.337900   32267 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:31:11.340786   32267 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:11.341161   32267 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:11.341188   32267 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:11.341323   32267 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:11.341836   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.341892   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.355834   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0319 19:31:11.356155   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.356628   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.356653   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.357040   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.357266   32267 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:31:11.357459   32267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:11.357482   32267 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:31:11.360880   32267 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:11.361703   32267 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:11.361735   32267 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:11.364374   32267 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:31:11.364633   32267 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:31:11.364946   32267 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:31:11.365117   32267 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:31:11.450573   32267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:11.482256   32267 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:11.482284   32267 api_server.go:166] Checking apiserver status ...
	I0319 19:31:11.482320   32267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:11.499188   32267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:31:11.511021   32267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:11.511072   32267 ssh_runner.go:195] Run: ls
	I0319 19:31:11.517647   32267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:11.521800   32267 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:11.521829   32267 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:31:11.521839   32267 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:11.521870   32267 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:31:11.522206   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.522243   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.536584   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I0319 19:31:11.536953   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.537358   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.537376   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.537669   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.537840   32267 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:31:11.539229   32267 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:31:11.539241   32267 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:11.539516   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.539546   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.555680   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I0319 19:31:11.556115   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.556547   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.556584   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.556914   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.557126   32267 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:31:11.559910   32267 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:11.560287   32267 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:11.560318   32267 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:11.560450   32267 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:11.560703   32267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:11.560736   32267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:11.576527   32267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0319 19:31:11.576875   32267 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:11.577320   32267 main.go:141] libmachine: Using API Version  1
	I0319 19:31:11.577342   32267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:11.577628   32267 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:11.577799   32267 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:31:11.577992   32267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:11.578015   32267 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:31:11.580797   32267 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:11.581142   32267 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:11.581164   32267 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:11.581288   32267 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:31:11.581447   32267 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:31:11.581611   32267 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:31:11.581737   32267 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:31:11.665342   32267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:11.683506   32267 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 7 (646.929979ms)

-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0319 19:31:17.116486   32370 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:31:17.116583   32370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:17.116591   32370 out.go:304] Setting ErrFile to fd 2...
	I0319 19:31:17.116594   32370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:17.116810   32370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:31:17.116981   32370 out.go:298] Setting JSON to false
	I0319 19:31:17.117006   32370 mustload.go:65] Loading cluster: ha-218762
	I0319 19:31:17.117108   32370 notify.go:220] Checking for updates...
	I0319 19:31:17.117344   32370 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:31:17.117358   32370 status.go:255] checking status of ha-218762 ...
	I0319 19:31:17.117712   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.117769   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.135068   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I0319 19:31:17.135498   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.135970   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.135991   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.136438   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.136632   32370 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:31:17.138402   32370 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:31:17.138424   32370 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:17.138840   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.138901   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.153359   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34621
	I0319 19:31:17.153818   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.154366   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.154404   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.154736   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.154945   32370 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:31:17.158110   32370 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:17.158423   32370 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:17.158446   32370 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:17.158622   32370 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:17.158890   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.158921   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.173292   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34949
	I0319 19:31:17.173686   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.174119   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.174138   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.174461   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.174666   32370 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:31:17.174893   32370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:17.174922   32370 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:31:17.177460   32370 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:17.177907   32370 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:17.177942   32370 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:17.178030   32370 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:31:17.178199   32370 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:31:17.178314   32370 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:31:17.178468   32370 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:31:17.270704   32370 ssh_runner.go:195] Run: systemctl --version
	I0319 19:31:17.278360   32370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:17.295196   32370 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:17.295220   32370 api_server.go:166] Checking apiserver status ...
	I0319 19:31:17.295248   32370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:17.309583   32370 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:31:17.320197   32370 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:17.320252   32370 ssh_runner.go:195] Run: ls
	I0319 19:31:17.325245   32370 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:17.330553   32370 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:17.330575   32370 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:31:17.330588   32370 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:17.330609   32370 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:31:17.330996   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.331042   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.346261   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0319 19:31:17.346582   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.347039   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.347058   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.347345   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.347527   32370 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:31:17.348987   32370 status.go:330] ha-218762-m02 host status = "Stopped" (err=<nil>)
	I0319 19:31:17.349002   32370 status.go:343] host is not running, skipping remaining checks
	I0319 19:31:17.349010   32370 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:17.349030   32370 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:31:17.349436   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.349471   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.363949   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0319 19:31:17.364272   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.364705   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.364727   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.365029   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.365150   32370 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:31:17.366640   32370 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:31:17.366653   32370 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:17.366904   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.366961   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.381756   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0319 19:31:17.382112   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.382591   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.382615   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.382952   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.383134   32370 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:31:17.385930   32370 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:17.386310   32370 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:17.386336   32370 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:17.386483   32370 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:17.386763   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.386804   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.401055   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I0319 19:31:17.401479   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.401865   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.401879   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.402163   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.402330   32370 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:31:17.402496   32370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:17.402516   32370 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:31:17.405023   32370 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:17.405416   32370 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:17.405446   32370 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:17.405548   32370 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:31:17.405693   32370 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:31:17.405825   32370 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:31:17.405968   32370 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:31:17.490009   32370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:17.511053   32370 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:17.511081   32370 api_server.go:166] Checking apiserver status ...
	I0319 19:31:17.511113   32370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:17.527701   32370 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:31:17.537715   32370 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:17.537758   32370 ssh_runner.go:195] Run: ls
	I0319 19:31:17.542950   32370 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:17.547536   32370 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:17.547562   32370 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:31:17.547570   32370 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:17.547584   32370 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:31:17.547852   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.547948   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.562753   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0319 19:31:17.563154   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.563707   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.563731   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.564096   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.564304   32370 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:31:17.565843   32370 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:31:17.565860   32370 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:17.566145   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.566174   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.581072   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38119
	I0319 19:31:17.581385   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.581803   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.581821   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.582104   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.582260   32370 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:31:17.584919   32370 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:17.585293   32370 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:17.585321   32370 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:17.585472   32370 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:17.585835   32370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:17.585883   32370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:17.600000   32370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33867
	I0319 19:31:17.600318   32370 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:17.600740   32370 main.go:141] libmachine: Using API Version  1
	I0319 19:31:17.600758   32370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:17.601086   32370 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:17.601260   32370 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:31:17.601482   32370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:17.601506   32370 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:31:17.604139   32370 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:17.604628   32370 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:17.604652   32370 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:17.604802   32370 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:31:17.604975   32370 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:31:17.605125   32370 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:31:17.605263   32370 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:31:17.692981   32370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:17.708569   32370 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 7 (643.92505ms)

-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-218762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0319 19:31:26.038952   32465 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:31:26.039547   32465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:26.039608   32465 out.go:304] Setting ErrFile to fd 2...
	I0319 19:31:26.039627   32465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:26.040084   32465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:31:26.040696   32465 out.go:298] Setting JSON to false
	I0319 19:31:26.040729   32465 mustload.go:65] Loading cluster: ha-218762
	I0319 19:31:26.040851   32465 notify.go:220] Checking for updates...
	I0319 19:31:26.041148   32465 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:31:26.041164   32465 status.go:255] checking status of ha-218762 ...
	I0319 19:31:26.041555   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.041616   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.057532   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I0319 19:31:26.058044   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.058703   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.058746   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.059122   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.059289   32465 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:31:26.061036   32465 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:31:26.061049   32465 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:26.061368   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.061431   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.075642   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I0319 19:31:26.076022   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.076519   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.076541   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.076899   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.077144   32465 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:31:26.080352   32465 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:26.080821   32465 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:26.080862   32465 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:26.080999   32465 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:31:26.081269   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.081304   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.095208   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34607
	I0319 19:31:26.095606   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.096027   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.096044   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.096389   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.096550   32465 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:31:26.096745   32465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:26.096771   32465 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:31:26.099172   32465 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:26.099574   32465 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:31:26.099611   32465 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:31:26.099663   32465 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:31:26.099835   32465 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:31:26.100032   32465 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:31:26.100175   32465 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:31:26.189218   32465 ssh_runner.go:195] Run: systemctl --version
	I0319 19:31:26.196158   32465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:26.218835   32465 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:26.218860   32465 api_server.go:166] Checking apiserver status ...
	I0319 19:31:26.218895   32465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:26.235266   32465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0319 19:31:26.245853   32465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:26.245899   32465 ssh_runner.go:195] Run: ls
	I0319 19:31:26.251100   32465 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:26.255710   32465 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:26.255731   32465 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:31:26.255739   32465 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:26.255752   32465 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:31:26.256073   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.256109   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.271867   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
	I0319 19:31:26.272340   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.272791   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.272808   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.273129   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.273301   32465 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:31:26.274659   32465 status.go:330] ha-218762-m02 host status = "Stopped" (err=<nil>)
	I0319 19:31:26.274670   32465 status.go:343] host is not running, skipping remaining checks
	I0319 19:31:26.274675   32465 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:26.274689   32465 status.go:255] checking status of ha-218762-m03 ...
	I0319 19:31:26.275052   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.275098   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.290646   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0319 19:31:26.291024   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.291464   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.291487   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.291750   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.291926   32465 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:31:26.293313   32465 status.go:330] ha-218762-m03 host status = "Running" (err=<nil>)
	I0319 19:31:26.293327   32465 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:26.293593   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.293645   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.309044   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43583
	I0319 19:31:26.309387   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.309803   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.309819   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.310120   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.310324   32465 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:31:26.312829   32465 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:26.313224   32465 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:26.313245   32465 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:26.313372   32465 host.go:66] Checking if "ha-218762-m03" exists ...
	I0319 19:31:26.313658   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.313689   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.327561   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45561
	I0319 19:31:26.327923   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.328394   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.328420   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.328698   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.328873   32465 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:31:26.329054   32465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:26.329084   32465 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:31:26.331430   32465 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:26.331767   32465 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:26.331805   32465 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:26.331892   32465 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:31:26.332056   32465 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:31:26.332210   32465 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:31:26.332395   32465 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:31:26.413775   32465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:26.429285   32465 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:31:26.429309   32465 api_server.go:166] Checking apiserver status ...
	I0319 19:31:26.429346   32465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:31:26.446226   32465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0319 19:31:26.456188   32465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:31:26.456226   32465 ssh_runner.go:195] Run: ls
	I0319 19:31:26.461380   32465 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:31:26.465893   32465 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:31:26.465910   32465 status.go:422] ha-218762-m03 apiserver status = Running (err=<nil>)
	I0319 19:31:26.465918   32465 status.go:257] ha-218762-m03 status: &{Name:ha-218762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:31:26.465935   32465 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:31:26.466277   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.466321   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.480528   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I0319 19:31:26.480951   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.481421   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.481444   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.481773   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.481945   32465 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:31:26.483229   32465 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:31:26.483245   32465 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:26.483537   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.483569   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.497757   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0319 19:31:26.498118   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.498540   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.498562   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.498855   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.499035   32465 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:31:26.501971   32465 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:26.502399   32465 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:26.502423   32465 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:26.502586   32465 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:31:26.502931   32465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:26.502978   32465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:26.520216   32465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0319 19:31:26.520628   32465 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:26.521123   32465 main.go:141] libmachine: Using API Version  1
	I0319 19:31:26.521143   32465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:26.521441   32465 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:26.521621   32465 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:31:26.521814   32465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:31:26.521841   32465 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:31:26.524559   32465 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:26.524969   32465 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:26.524990   32465 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:26.525139   32465 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:31:26.525315   32465 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:31:26.525465   32465 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:31:26.525585   32465 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:31:26.612993   32465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:31:26.629112   32465 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-218762 -n ha-218762
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-218762 logs -n 25: (1.5287966s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m03_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m04 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp testdata/cp-test.txt                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m04_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03:/home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m03 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-218762 node stop m02 -v=7                                                     | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-218762 node start m02 -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:23:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:23:13.578354   27348 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:23:13.578457   27348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:23:13.578468   27348 out.go:304] Setting ErrFile to fd 2...
	I0319 19:23:13.578472   27348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:23:13.578647   27348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:23:13.579240   27348 out.go:298] Setting JSON to false
	I0319 19:23:13.580101   27348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3892,"bootTime":1710872302,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:23:13.580155   27348 start.go:139] virtualization: kvm guest
	I0319 19:23:13.582378   27348 out.go:177] * [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:23:13.583824   27348 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:23:13.583830   27348 notify.go:220] Checking for updates...
	I0319 19:23:13.585154   27348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:23:13.586458   27348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:23:13.587615   27348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:23:13.588969   27348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:23:13.590067   27348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:23:13.591295   27348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:23:13.624196   27348 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 19:23:13.625498   27348 start.go:297] selected driver: kvm2
	I0319 19:23:13.625510   27348 start.go:901] validating driver "kvm2" against <nil>
	I0319 19:23:13.625520   27348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:23:13.626162   27348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:23:13.626226   27348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:23:13.640062   27348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:23:13.640098   27348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 19:23:13.640328   27348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:23:13.640399   27348 cni.go:84] Creating CNI manager for ""
	I0319 19:23:13.640422   27348 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0319 19:23:13.640432   27348 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0319 19:23:13.640507   27348 start.go:340] cluster config:
	{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:23:13.640644   27348 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:23:13.642511   27348 out.go:177] * Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	I0319 19:23:13.643758   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:23:13.643785   27348 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:23:13.643790   27348 cache.go:56] Caching tarball of preloaded images
	I0319 19:23:13.643870   27348 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:23:13.643884   27348 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:23:13.644148   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:23:13.644166   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json: {Name:mka9a0c31e052f0341976073e8a572d7e1505326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:13.644303   27348 start.go:360] acquireMachinesLock for ha-218762: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:23:13.644337   27348 start.go:364] duration metric: took 19.537µs to acquireMachinesLock for "ha-218762"
	I0319 19:23:13.644354   27348 start.go:93] Provisioning new machine with config: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:23:13.644414   27348 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 19:23:13.645899   27348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 19:23:13.646009   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:23:13.646048   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:23:13.659854   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45163
	I0319 19:23:13.660295   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:23:13.660824   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:23:13.660846   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:23:13.661137   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:23:13.661286   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:13.661439   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:13.661551   27348 start.go:159] libmachine.API.Create for "ha-218762" (driver="kvm2")
	I0319 19:23:13.661577   27348 client.go:168] LocalClient.Create starting
	I0319 19:23:13.661606   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:23:13.661643   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:23:13.661656   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:23:13.661706   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:23:13.661723   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:23:13.661734   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:23:13.661758   27348 main.go:141] libmachine: Running pre-create checks...
	I0319 19:23:13.661771   27348 main.go:141] libmachine: (ha-218762) Calling .PreCreateCheck
	I0319 19:23:13.662091   27348 main.go:141] libmachine: (ha-218762) Calling .GetConfigRaw
	I0319 19:23:13.662411   27348 main.go:141] libmachine: Creating machine...
	I0319 19:23:13.662423   27348 main.go:141] libmachine: (ha-218762) Calling .Create
	I0319 19:23:13.662532   27348 main.go:141] libmachine: (ha-218762) Creating KVM machine...
	I0319 19:23:13.663650   27348 main.go:141] libmachine: (ha-218762) DBG | found existing default KVM network
	I0319 19:23:13.664218   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:13.664103   27371 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0319 19:23:13.664252   27348 main.go:141] libmachine: (ha-218762) DBG | created network xml: 
	I0319 19:23:13.664303   27348 main.go:141] libmachine: (ha-218762) DBG | <network>
	I0319 19:23:13.664318   27348 main.go:141] libmachine: (ha-218762) DBG |   <name>mk-ha-218762</name>
	I0319 19:23:13.664328   27348 main.go:141] libmachine: (ha-218762) DBG |   <dns enable='no'/>
	I0319 19:23:13.664339   27348 main.go:141] libmachine: (ha-218762) DBG |   
	I0319 19:23:13.664350   27348 main.go:141] libmachine: (ha-218762) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0319 19:23:13.664364   27348 main.go:141] libmachine: (ha-218762) DBG |     <dhcp>
	I0319 19:23:13.664374   27348 main.go:141] libmachine: (ha-218762) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0319 19:23:13.664387   27348 main.go:141] libmachine: (ha-218762) DBG |     </dhcp>
	I0319 19:23:13.664400   27348 main.go:141] libmachine: (ha-218762) DBG |   </ip>
	I0319 19:23:13.664422   27348 main.go:141] libmachine: (ha-218762) DBG |   
	I0319 19:23:13.664446   27348 main.go:141] libmachine: (ha-218762) DBG | </network>
	I0319 19:23:13.664460   27348 main.go:141] libmachine: (ha-218762) DBG | 
	I0319 19:23:13.669054   27348 main.go:141] libmachine: (ha-218762) DBG | trying to create private KVM network mk-ha-218762 192.168.39.0/24...
	I0319 19:23:13.731711   27348 main.go:141] libmachine: (ha-218762) DBG | private KVM network mk-ha-218762 192.168.39.0/24 created
	I0319 19:23:13.731745   27348 main.go:141] libmachine: (ha-218762) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762 ...
	I0319 19:23:13.731776   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:13.731677   27371 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:23:13.731794   27348 main.go:141] libmachine: (ha-218762) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:23:13.731822   27348 main.go:141] libmachine: (ha-218762) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:23:13.954575   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:13.954473   27371 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa...
	I0319 19:23:14.183743   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:14.183531   27371 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/ha-218762.rawdisk...
	I0319 19:23:14.183827   27348 main.go:141] libmachine: (ha-218762) DBG | Writing magic tar header
	I0319 19:23:14.183846   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762 (perms=drwx------)
	I0319 19:23:14.183894   27348 main.go:141] libmachine: (ha-218762) DBG | Writing SSH key tar header
	I0319 19:23:14.183926   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:14.183675   27371 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762 ...
	I0319 19:23:14.183940   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:23:14.183953   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:23:14.183964   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762
	I0319 19:23:14.183975   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:23:14.183987   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:23:14.183997   27348 main.go:141] libmachine: (ha-218762) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:23:14.184011   27348 main.go:141] libmachine: (ha-218762) Creating domain...
	I0319 19:23:14.184027   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:23:14.184039   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:23:14.184060   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:23:14.184080   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:23:14.184092   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:23:14.184103   27348 main.go:141] libmachine: (ha-218762) DBG | Checking permissions on dir: /home
	I0319 19:23:14.184113   27348 main.go:141] libmachine: (ha-218762) DBG | Skipping /home - not owner
	I0319 19:23:14.185044   27348 main.go:141] libmachine: (ha-218762) define libvirt domain using xml: 
	I0319 19:23:14.185068   27348 main.go:141] libmachine: (ha-218762) <domain type='kvm'>
	I0319 19:23:14.185079   27348 main.go:141] libmachine: (ha-218762)   <name>ha-218762</name>
	I0319 19:23:14.185086   27348 main.go:141] libmachine: (ha-218762)   <memory unit='MiB'>2200</memory>
	I0319 19:23:14.185109   27348 main.go:141] libmachine: (ha-218762)   <vcpu>2</vcpu>
	I0319 19:23:14.185121   27348 main.go:141] libmachine: (ha-218762)   <features>
	I0319 19:23:14.185127   27348 main.go:141] libmachine: (ha-218762)     <acpi/>
	I0319 19:23:14.185140   27348 main.go:141] libmachine: (ha-218762)     <apic/>
	I0319 19:23:14.185152   27348 main.go:141] libmachine: (ha-218762)     <pae/>
	I0319 19:23:14.185165   27348 main.go:141] libmachine: (ha-218762)     
	I0319 19:23:14.185175   27348 main.go:141] libmachine: (ha-218762)   </features>
	I0319 19:23:14.185202   27348 main.go:141] libmachine: (ha-218762)   <cpu mode='host-passthrough'>
	I0319 19:23:14.185215   27348 main.go:141] libmachine: (ha-218762)   
	I0319 19:23:14.185221   27348 main.go:141] libmachine: (ha-218762)   </cpu>
	I0319 19:23:14.185225   27348 main.go:141] libmachine: (ha-218762)   <os>
	I0319 19:23:14.185233   27348 main.go:141] libmachine: (ha-218762)     <type>hvm</type>
	I0319 19:23:14.185237   27348 main.go:141] libmachine: (ha-218762)     <boot dev='cdrom'/>
	I0319 19:23:14.185242   27348 main.go:141] libmachine: (ha-218762)     <boot dev='hd'/>
	I0319 19:23:14.185250   27348 main.go:141] libmachine: (ha-218762)     <bootmenu enable='no'/>
	I0319 19:23:14.185254   27348 main.go:141] libmachine: (ha-218762)   </os>
	I0319 19:23:14.185259   27348 main.go:141] libmachine: (ha-218762)   <devices>
	I0319 19:23:14.185265   27348 main.go:141] libmachine: (ha-218762)     <disk type='file' device='cdrom'>
	I0319 19:23:14.185271   27348 main.go:141] libmachine: (ha-218762)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/boot2docker.iso'/>
	I0319 19:23:14.185276   27348 main.go:141] libmachine: (ha-218762)       <target dev='hdc' bus='scsi'/>
	I0319 19:23:14.185283   27348 main.go:141] libmachine: (ha-218762)       <readonly/>
	I0319 19:23:14.185288   27348 main.go:141] libmachine: (ha-218762)     </disk>
	I0319 19:23:14.185292   27348 main.go:141] libmachine: (ha-218762)     <disk type='file' device='disk'>
	I0319 19:23:14.185305   27348 main.go:141] libmachine: (ha-218762)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:23:14.185311   27348 main.go:141] libmachine: (ha-218762)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/ha-218762.rawdisk'/>
	I0319 19:23:14.185316   27348 main.go:141] libmachine: (ha-218762)       <target dev='hda' bus='virtio'/>
	I0319 19:23:14.185321   27348 main.go:141] libmachine: (ha-218762)     </disk>
	I0319 19:23:14.185326   27348 main.go:141] libmachine: (ha-218762)     <interface type='network'>
	I0319 19:23:14.185330   27348 main.go:141] libmachine: (ha-218762)       <source network='mk-ha-218762'/>
	I0319 19:23:14.185338   27348 main.go:141] libmachine: (ha-218762)       <model type='virtio'/>
	I0319 19:23:14.185342   27348 main.go:141] libmachine: (ha-218762)     </interface>
	I0319 19:23:14.185347   27348 main.go:141] libmachine: (ha-218762)     <interface type='network'>
	I0319 19:23:14.185351   27348 main.go:141] libmachine: (ha-218762)       <source network='default'/>
	I0319 19:23:14.185361   27348 main.go:141] libmachine: (ha-218762)       <model type='virtio'/>
	I0319 19:23:14.185366   27348 main.go:141] libmachine: (ha-218762)     </interface>
	I0319 19:23:14.185370   27348 main.go:141] libmachine: (ha-218762)     <serial type='pty'>
	I0319 19:23:14.185374   27348 main.go:141] libmachine: (ha-218762)       <target port='0'/>
	I0319 19:23:14.185379   27348 main.go:141] libmachine: (ha-218762)     </serial>
	I0319 19:23:14.185383   27348 main.go:141] libmachine: (ha-218762)     <console type='pty'>
	I0319 19:23:14.185388   27348 main.go:141] libmachine: (ha-218762)       <target type='serial' port='0'/>
	I0319 19:23:14.185402   27348 main.go:141] libmachine: (ha-218762)     </console>
	I0319 19:23:14.185424   27348 main.go:141] libmachine: (ha-218762)     <rng model='virtio'>
	I0319 19:23:14.185465   27348 main.go:141] libmachine: (ha-218762)       <backend model='random'>/dev/random</backend>
	I0319 19:23:14.185478   27348 main.go:141] libmachine: (ha-218762)     </rng>
	I0319 19:23:14.185487   27348 main.go:141] libmachine: (ha-218762)     
	I0319 19:23:14.185493   27348 main.go:141] libmachine: (ha-218762)     
	I0319 19:23:14.185503   27348 main.go:141] libmachine: (ha-218762)   </devices>
	I0319 19:23:14.185511   27348 main.go:141] libmachine: (ha-218762) </domain>
	I0319 19:23:14.185522   27348 main.go:141] libmachine: (ha-218762) 
	I0319 19:23:14.189592   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:f0:c3:61 in network default
	I0319 19:23:14.190109   27348 main.go:141] libmachine: (ha-218762) Ensuring networks are active...
	I0319 19:23:14.190128   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:14.190681   27348 main.go:141] libmachine: (ha-218762) Ensuring network default is active
	I0319 19:23:14.190947   27348 main.go:141] libmachine: (ha-218762) Ensuring network mk-ha-218762 is active
	I0319 19:23:14.191375   27348 main.go:141] libmachine: (ha-218762) Getting domain xml...
	I0319 19:23:14.192006   27348 main.go:141] libmachine: (ha-218762) Creating domain...
	I0319 19:23:15.345251   27348 main.go:141] libmachine: (ha-218762) Waiting to get IP...
	I0319 19:23:15.345976   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:15.346360   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:15.346403   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:15.346356   27371 retry.go:31] will retry after 309.498905ms: waiting for machine to come up
	I0319 19:23:15.657770   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:15.658192   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:15.658216   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:15.658149   27371 retry.go:31] will retry after 276.733838ms: waiting for machine to come up
	I0319 19:23:15.936591   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:15.937000   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:15.937030   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:15.936953   27371 retry.go:31] will retry after 358.761144ms: waiting for machine to come up
	I0319 19:23:16.297370   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:16.297822   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:16.297845   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:16.297775   27371 retry.go:31] will retry after 555.023033ms: waiting for machine to come up
	I0319 19:23:16.854501   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:16.854954   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:16.854985   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:16.854900   27371 retry.go:31] will retry after 485.696214ms: waiting for machine to come up
	I0319 19:23:17.342321   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:17.342821   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:17.342848   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:17.342788   27371 retry.go:31] will retry after 799.596882ms: waiting for machine to come up
	I0319 19:23:18.143605   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:18.144020   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:18.144053   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:18.143980   27371 retry.go:31] will retry after 779.78661ms: waiting for machine to come up
	I0319 19:23:18.925208   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:18.925603   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:18.925632   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:18.925542   27371 retry.go:31] will retry after 1.214561373s: waiting for machine to come up
	I0319 19:23:20.141785   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:20.142140   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:20.142160   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:20.142111   27371 retry.go:31] will retry after 1.178568266s: waiting for machine to come up
	I0319 19:23:21.321878   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:21.322139   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:21.322166   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:21.322104   27371 retry.go:31] will retry after 1.566328576s: waiting for machine to come up
	I0319 19:23:22.889584   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:22.890005   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:22.890057   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:22.889947   27371 retry.go:31] will retry after 1.840325389s: waiting for machine to come up
	I0319 19:23:24.731419   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:24.731835   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:24.731863   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:24.731800   27371 retry.go:31] will retry after 3.175644061s: waiting for machine to come up
	I0319 19:23:27.909404   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:27.909716   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:27.909739   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:27.909665   27371 retry.go:31] will retry after 3.654470598s: waiting for machine to come up
	I0319 19:23:31.567747   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:31.568069   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find current IP address of domain ha-218762 in network mk-ha-218762
	I0319 19:23:31.568089   27348 main.go:141] libmachine: (ha-218762) DBG | I0319 19:23:31.568041   27371 retry.go:31] will retry after 3.714075051s: waiting for machine to come up
	I0319 19:23:35.283120   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.283555   27348 main.go:141] libmachine: (ha-218762) Found IP for machine: 192.168.39.200
	I0319 19:23:35.283574   27348 main.go:141] libmachine: (ha-218762) Reserving static IP address...
	I0319 19:23:35.283583   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has current primary IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.283898   27348 main.go:141] libmachine: (ha-218762) DBG | unable to find host DHCP lease matching {name: "ha-218762", mac: "52:54:00:2b:ad:c2", ip: "192.168.39.200"} in network mk-ha-218762
	I0319 19:23:35.350942   27348 main.go:141] libmachine: (ha-218762) DBG | Getting to WaitForSSH function...
	I0319 19:23:35.350970   27348 main.go:141] libmachine: (ha-218762) Reserved static IP address: 192.168.39.200
	I0319 19:23:35.350982   27348 main.go:141] libmachine: (ha-218762) Waiting for SSH to be available...
	I0319 19:23:35.353361   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.353792   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.353814   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.353993   27348 main.go:141] libmachine: (ha-218762) DBG | Using SSH client type: external
	I0319 19:23:35.354015   27348 main.go:141] libmachine: (ha-218762) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa (-rw-------)
	I0319 19:23:35.354056   27348 main.go:141] libmachine: (ha-218762) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:23:35.354066   27348 main.go:141] libmachine: (ha-218762) DBG | About to run SSH command:
	I0319 19:23:35.354096   27348 main.go:141] libmachine: (ha-218762) DBG | exit 0
	I0319 19:23:35.480569   27348 main.go:141] libmachine: (ha-218762) DBG | SSH cmd err, output: <nil>: 
	I0319 19:23:35.480786   27348 main.go:141] libmachine: (ha-218762) KVM machine creation complete!
	I0319 19:23:35.481121   27348 main.go:141] libmachine: (ha-218762) Calling .GetConfigRaw
	I0319 19:23:35.481629   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:35.481808   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:35.481951   27348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:23:35.481966   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:23:35.483075   27348 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:23:35.483089   27348 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:23:35.483098   27348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:23:35.483105   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.485259   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.485608   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.485641   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.485697   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.485860   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.486014   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.486166   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.486335   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.486513   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.486527   27348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:23:35.595882   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:23:35.595908   27348 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:23:35.595929   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.598562   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.598951   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.598976   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.599134   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.599315   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.599449   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.599563   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.599758   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.599962   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.599978   27348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:23:35.709230   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:23:35.709293   27348 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:23:35.709308   27348 main.go:141] libmachine: Provisioning with buildroot...
	I0319 19:23:35.709318   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:35.709541   27348 buildroot.go:166] provisioning hostname "ha-218762"
	I0319 19:23:35.709568   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:35.709762   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.712302   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.712607   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.712635   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.712734   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.712899   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.713040   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.713195   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.713325   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.713552   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.713567   27348 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762 && echo "ha-218762" | sudo tee /etc/hostname
	I0319 19:23:35.835388   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:23:35.835411   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.838021   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.838452   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.838472   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.838641   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:35.838823   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.838988   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:35.839139   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:35.839313   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:35.839496   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:35.839524   27348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:23:35.959035   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:23:35.959065   27348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:23:35.959123   27348 buildroot.go:174] setting up certificates
	I0319 19:23:35.959146   27348 provision.go:84] configureAuth start
	I0319 19:23:35.959163   27348 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:23:35.959451   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:35.961875   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.962224   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.962250   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.962397   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:35.965311   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.965668   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:35.965692   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:35.965867   27348 provision.go:143] copyHostCerts
	I0319 19:23:35.965900   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:23:35.965941   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:23:35.965954   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:23:35.966021   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:23:35.966112   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:23:35.966137   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:23:35.966146   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:23:35.966186   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:23:35.966240   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:23:35.966259   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:23:35.966267   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:23:35.966301   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:23:35.966357   27348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762 san=[127.0.0.1 192.168.39.200 ha-218762 localhost minikube]
	I0319 19:23:36.247556   27348 provision.go:177] copyRemoteCerts
	I0319 19:23:36.247606   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:23:36.247627   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.250153   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.250432   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.250451   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.250628   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.250787   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.250912   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.251054   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:36.334715   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:23:36.334794   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:23:36.361458   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:23:36.361528   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0319 19:23:36.387710   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:23:36.387765   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 19:23:36.413310   27348 provision.go:87] duration metric: took 454.152044ms to configureAuth
	I0319 19:23:36.413327   27348 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:23:36.413468   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:23:36.413529   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.416309   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.416650   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.416681   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.416830   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.416981   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.417140   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.417272   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.417443   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:36.417636   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:36.417652   27348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:23:36.697091   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:23:36.697123   27348 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:23:36.697139   27348 main.go:141] libmachine: (ha-218762) Calling .GetURL
	I0319 19:23:36.698601   27348 main.go:141] libmachine: (ha-218762) DBG | Using libvirt version 6000000
	I0319 19:23:36.700778   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.701118   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.701146   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.701320   27348 main.go:141] libmachine: Docker is up and running!
	I0319 19:23:36.701336   27348 main.go:141] libmachine: Reticulating splines...
	I0319 19:23:36.701342   27348 client.go:171] duration metric: took 23.039758114s to LocalClient.Create
	I0319 19:23:36.701361   27348 start.go:167] duration metric: took 23.039811148s to libmachine.API.Create "ha-218762"
	I0319 19:23:36.701370   27348 start.go:293] postStartSetup for "ha-218762" (driver="kvm2")
	I0319 19:23:36.701379   27348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:23:36.701393   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.701648   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:23:36.701675   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.703532   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.703828   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.703853   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.703974   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.704125   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.704296   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.704428   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:36.786952   27348 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:23:36.791724   27348 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:23:36.791745   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:23:36.791806   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:23:36.791910   27348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:23:36.791923   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:23:36.792043   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:23:36.801988   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:23:36.828000   27348 start.go:296] duration metric: took 126.618743ms for postStartSetup
	I0319 19:23:36.828039   27348 main.go:141] libmachine: (ha-218762) Calling .GetConfigRaw
	I0319 19:23:36.828557   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:36.830625   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.830958   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.830989   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.831153   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:23:36.831315   27348 start.go:128] duration metric: took 23.186893376s to createHost
	I0319 19:23:36.831335   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.833256   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.833565   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.833589   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.833711   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.833876   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.834027   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.834144   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.834321   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:23:36.834476   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:23:36.834495   27348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:23:36.945337   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876216.913641100
	
	I0319 19:23:36.945358   27348 fix.go:216] guest clock: 1710876216.913641100
	I0319 19:23:36.945373   27348 fix.go:229] Guest: 2024-03-19 19:23:36.9136411 +0000 UTC Remote: 2024-03-19 19:23:36.831326652 +0000 UTC m=+23.297982092 (delta=82.314448ms)
	I0319 19:23:36.945396   27348 fix.go:200] guest clock delta is within tolerance: 82.314448ms
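	(For reference, the delta reported above is simply guest clock minus host clock: 1710876216.913641100 − 1710876216.831326652 ≈ 0.0823 s, i.e. the 82.314448ms shown, well inside the tolerance the fixer checks against.)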
	I0319 19:23:36.945403   27348 start.go:83] releasing machines lock for "ha-218762", held for 23.301056143s
	I0319 19:23:36.945423   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.945688   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:36.948216   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.948553   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.948588   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.948737   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.949237   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.949405   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:23:36.949503   27348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:23:36.949539   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.949626   27348 ssh_runner.go:195] Run: cat /version.json
	I0319 19:23:36.949648   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:23:36.951851   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952164   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952195   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.952230   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952335   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.952513   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.952671   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:36.952689   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:36.952693   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.952820   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:23:36.952836   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:36.952955   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:23:36.953116   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:23:36.953252   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:23:37.053228   27348 ssh_runner.go:195] Run: systemctl --version
	I0319 19:23:37.059509   27348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:23:37.227644   27348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:23:37.234733   27348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:23:37.234793   27348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:23:37.253674   27348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 19:23:37.253689   27348 start.go:494] detecting cgroup driver to use...
	I0319 19:23:37.253745   27348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:23:37.271225   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:23:37.287126   27348 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:23:37.287166   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:23:37.302316   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:23:37.317370   27348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:23:37.445354   27348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:23:37.614479   27348 docker.go:233] disabling docker service ...
	I0319 19:23:37.614536   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:23:37.630422   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:23:37.644393   27348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:23:37.770883   27348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:23:37.884881   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:23:37.900070   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:23:37.920353   27348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:23:37.920417   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.931523   27348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:23:37.931575   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.942549   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.953522   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.964552   27348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:23:37.976425   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:37.987542   27348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:38.008218   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:23:38.019579   27348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:23:38.029874   27348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:23:38.029919   27348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:23:38.047948   27348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:23:38.062702   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:23:38.173044   27348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:23:38.313608   27348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:23:38.313666   27348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:23:38.319057   27348 start.go:562] Will wait 60s for crictl version
	I0319 19:23:38.319105   27348 ssh_runner.go:195] Run: which crictl
	I0319 19:23:38.323217   27348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:23:38.360505   27348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:23:38.360605   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:23:38.390311   27348 ssh_runner.go:195] Run: crio --version
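	(The CRI-O adjustments logged above amount to a handful of file edits on the guest. A condensed, hand-written sketch of the equivalent end state, with values and paths taken from the log rather than a verbatim transcript of the sed calls:
	  # as root on the guest VM
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
	  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	  sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  systemctl daemon-reload && systemctl restart crio
	)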
	I0319 19:23:38.425010   27348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:23:38.426364   27348 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:23:38.428934   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:38.429286   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:23:38.429315   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:23:38.429518   27348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:23:38.434013   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:23:38.448099   27348 kubeadm.go:877] updating cluster {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:23:38.448203   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:23:38.448250   27348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:23:38.488018   27348 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 19:23:38.488086   27348 ssh_runner.go:195] Run: which lz4
	I0319 19:23:38.492522   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0319 19:23:38.492593   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 19:23:38.497145   27348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 19:23:38.497181   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 19:23:40.122811   27348 crio.go:462] duration metric: took 1.630235492s to copy over tarball
	I0319 19:23:40.122872   27348 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 19:23:42.749149   27348 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.626249337s)
	I0319 19:23:42.749175   27348 crio.go:469] duration metric: took 2.626342309s to extract the tarball
	I0319 19:23:42.749181   27348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 19:23:42.788753   27348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:23:42.838457   27348 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:23:42.838478   27348 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:23:42.838485   27348 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.29.3 crio true true} ...
	I0319 19:23:42.838575   27348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:23:42.838642   27348 ssh_runner.go:195] Run: crio config
	I0319 19:23:42.886617   27348 cni.go:84] Creating CNI manager for ""
	I0319 19:23:42.886637   27348 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0319 19:23:42.886648   27348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:23:42.886671   27348 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-218762 NodeName:ha-218762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:23:42.886785   27348 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-218762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
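	(The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the guest; see the scp step further down. A hypothetical manual sanity check with the bundled kubeadm binary, not something the test itself runs, would be:
	  sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)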
	I0319 19:23:42.886807   27348 kube-vip.go:111] generating kube-vip config ...
	I0319 19:23:42.886844   27348 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:23:42.905208   27348 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:23:42.905340   27348 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
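	(The manifest above pins the control-plane VIP to 192.168.39.254 and has kube-vip elect a leader through the plndr-cp-lock lease. A hypothetical way to watch that election once the cluster is up, not part of the test flow:
	  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
	)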
	I0319 19:23:42.905394   27348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:23:42.917363   27348 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:23:42.917427   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0319 19:23:42.928684   27348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0319 19:23:42.947717   27348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:23:42.965642   27348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0319 19:23:42.983361   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0319 19:23:43.001617   27348 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:23:43.006169   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:23:43.020479   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:23:43.156133   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:23:43.174176   27348 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.200
	I0319 19:23:43.174200   27348 certs.go:194] generating shared ca certs ...
	I0319 19:23:43.174248   27348 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.174403   27348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:23:43.174455   27348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:23:43.174466   27348 certs.go:256] generating profile certs ...
	I0319 19:23:43.174531   27348 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:23:43.174549   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt with IP's: []
	I0319 19:23:43.392882   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt ...
	I0319 19:23:43.392911   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt: {Name:mka24831a144650fc12e99fb7602b05e3ab4357e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.393069   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key ...
	I0319 19:23:43.393080   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key: {Name:mk8697710c9481a12f7f2d4bccbf8fdb6ac58ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.393149   27348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea
	I0319 19:23:43.393164   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.254]
	I0319 19:23:43.497035   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea ...
	I0319 19:23:43.497065   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea: {Name:mk165b88fe7af465704e1426acd53551f0b36afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.497220   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea ...
	I0319 19:23:43.497232   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea: {Name:mkaf35a6353add309ad7e0286840c720e8749efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.497301   27348 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.c4bc05ea -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:23:43.497387   27348 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.c4bc05ea -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:23:43.497441   27348 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:23:43.497454   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt with IP's: []
	I0319 19:23:43.612821   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt ...
	I0319 19:23:43.612854   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt: {Name:mk5190a53f71d376643d1104d7bca70bdf2e0c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.613018   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key ...
	I0319 19:23:43.613030   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key: {Name:mk678608f54131146d1bb7d6f39b5961f53f5ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:23:43.613102   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:23:43.613121   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:23:43.613133   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:23:43.613149   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:23:43.613165   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:23:43.613183   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:23:43.613198   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:23:43.613213   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:23:43.613265   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:23:43.613307   27348 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:23:43.613319   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:23:43.613344   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:23:43.613368   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:23:43.613392   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:23:43.613440   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:23:43.613483   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.613508   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:43.613523   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:23:43.614065   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:23:43.649385   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:23:43.676987   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:23:43.705282   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:23:43.733582   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 19:23:43.763301   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 19:23:43.790974   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:23:43.818659   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:23:43.845762   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:23:43.872698   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:23:43.899478   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:23:43.925954   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
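	(At this point the PKI material has been copied into /var/lib/minikube/certs and /usr/share/ca-certificates. A hypothetical manual check that the API server certificate carries the SANs requested earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.200, 192.168.39.254), again not something the test runs:
	  sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
	)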
	I0319 19:23:43.944608   27348 ssh_runner.go:195] Run: openssl version
	I0319 19:23:43.951180   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:23:43.966462   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.971605   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.971646   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:23:43.978480   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:23:43.992088   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:23:44.009985   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:44.015497   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:44.015556   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:23:44.024170   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:23:44.037957   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:23:44.058338   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:23:44.063411   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:23:44.063460   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:23:44.069749   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
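	The three commands above follow the standard OpenSSL trust-store convention: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (e.g. b5213941.0). The following is a minimal, illustrative Go sketch of that hash-and-symlink step, assuming openssl is on PATH; the helper name linkCACert is ours, not minikube's.

```go
// Hypothetical sketch of the hash-and-symlink step logged above: compute the
// OpenSSL subject hash of a CA certificate and link it into /etc/ssl/certs so
// the system trust store can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath string) error {
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of "ln -fs <cert> <hash>.0": drop any stale link, then relink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```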
	I0319 19:23:44.081479   27348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:23:44.086535   27348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:23:44.086585   27348 kubeadm.go:391] StartCluster: {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:23:44.086654   27348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:23:44.086706   27348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:23:44.126804   27348 cri.go:89] found id: ""
	I0319 19:23:44.126869   27348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 19:23:44.137706   27348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 19:23:44.147876   27348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 19:23:44.157979   27348 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 19:23:44.157995   27348 kubeadm.go:156] found existing configuration files:
	
	I0319 19:23:44.158029   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 19:23:44.167338   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 19:23:44.167391   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 19:23:44.176915   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 19:23:44.186816   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 19:23:44.186875   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 19:23:44.196663   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 19:23:44.206057   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 19:23:44.206107   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 19:23:44.216565   27348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 19:23:44.226717   27348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 19:23:44.226785   27348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
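	Before running kubeadm init, the runner checks each existing kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it, so kubeadm can regenerate it cleanly. Below is an illustrative Go sketch of that prune pattern (not minikube's code; it operates on local files rather than over SSH).

```go
// Illustrative sketch of the stale-config check logged above: keep a kubeconfig
// only if it already points at the expected control-plane endpoint, otherwise
// delete it so "kubeadm init" writes a fresh one.
package main

import (
	"os"
	"strings"
)

func pruneStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		// A missing file is fine — kubeadm init will create it.
		return nil
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already targets the expected endpoint, keep it
	}
	return os.Remove(path) // stale: remove so kubeadm regenerates it
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		_ = pruneStaleConfig(f, "https://control-plane.minikube.internal:8443")
	}
}
```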
	I0319 19:23:44.237255   27348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 19:23:44.478822   27348 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 19:23:56.108471   27348 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 19:23:56.108541   27348 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 19:23:56.108634   27348 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 19:23:56.108761   27348 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 19:23:56.108891   27348 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 19:23:56.108973   27348 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 19:23:56.110617   27348 out.go:204]   - Generating certificates and keys ...
	I0319 19:23:56.110716   27348 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 19:23:56.110803   27348 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 19:23:56.110902   27348 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 19:23:56.110989   27348 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 19:23:56.111074   27348 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 19:23:56.111137   27348 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 19:23:56.111210   27348 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 19:23:56.111359   27348 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-218762 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0319 19:23:56.111431   27348 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 19:23:56.111573   27348 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-218762 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0319 19:23:56.111656   27348 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 19:23:56.111740   27348 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 19:23:56.111811   27348 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 19:23:56.111888   27348 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 19:23:56.111956   27348 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 19:23:56.112037   27348 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 19:23:56.112111   27348 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 19:23:56.112231   27348 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 19:23:56.112344   27348 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 19:23:56.112478   27348 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 19:23:56.112574   27348 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 19:23:56.114865   27348 out.go:204]   - Booting up control plane ...
	I0319 19:23:56.114981   27348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 19:23:56.115089   27348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 19:23:56.115153   27348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 19:23:56.115252   27348 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 19:23:56.115398   27348 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 19:23:56.115458   27348 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 19:23:56.115652   27348 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 19:23:56.115752   27348 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.579346 seconds
	I0319 19:23:56.115877   27348 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 19:23:56.116037   27348 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 19:23:56.116085   27348 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 19:23:56.116304   27348 kubeadm.go:309] [mark-control-plane] Marking the node ha-218762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 19:23:56.116356   27348 kubeadm.go:309] [bootstrap-token] Using token: jgwb7g.gi5mwlrvqlxl7rgc
	I0319 19:23:56.117594   27348 out.go:204]   - Configuring RBAC rules ...
	I0319 19:23:56.117673   27348 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 19:23:56.117738   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 19:23:56.117853   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 19:23:56.118002   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 19:23:56.118157   27348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 19:23:56.118228   27348 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 19:23:56.118314   27348 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 19:23:56.118374   27348 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 19:23:56.118436   27348 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 19:23:56.118455   27348 kubeadm.go:309] 
	I0319 19:23:56.118542   27348 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 19:23:56.118556   27348 kubeadm.go:309] 
	I0319 19:23:56.118684   27348 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 19:23:56.118703   27348 kubeadm.go:309] 
	I0319 19:23:56.118757   27348 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 19:23:56.118843   27348 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 19:23:56.118912   27348 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 19:23:56.118923   27348 kubeadm.go:309] 
	I0319 19:23:56.118977   27348 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 19:23:56.118983   27348 kubeadm.go:309] 
	I0319 19:23:56.119042   27348 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 19:23:56.119054   27348 kubeadm.go:309] 
	I0319 19:23:56.119129   27348 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 19:23:56.119227   27348 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 19:23:56.119318   27348 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 19:23:56.119327   27348 kubeadm.go:309] 
	I0319 19:23:56.119432   27348 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 19:23:56.119493   27348 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 19:23:56.119500   27348 kubeadm.go:309] 
	I0319 19:23:56.119561   27348 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jgwb7g.gi5mwlrvqlxl7rgc \
	I0319 19:23:56.119649   27348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 19:23:56.119668   27348 kubeadm.go:309] 	--control-plane 
	I0319 19:23:56.119671   27348 kubeadm.go:309] 
	I0319 19:23:56.119734   27348 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 19:23:56.119740   27348 kubeadm.go:309] 
	I0319 19:23:56.119804   27348 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jgwb7g.gi5mwlrvqlxl7rgc \
	I0319 19:23:56.119959   27348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 19:23:56.119979   27348 cni.go:84] Creating CNI manager for ""
	I0319 19:23:56.119987   27348 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0319 19:23:56.121697   27348 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0319 19:23:56.123144   27348 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0319 19:23:56.138925   27348 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0319 19:23:56.138951   27348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0319 19:23:56.168991   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0319 19:23:56.616133   27348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 19:23:56.616226   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:56.616249   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-218762 minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=ha-218762 minikube.k8s.io/primary=true
	I0319 19:23:56.641653   27348 ops.go:34] apiserver oom_adj: -16
	I0319 19:23:56.759415   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:57.260372   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:57.759932   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:58.259813   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:58.759613   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:59.259463   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:23:59.760053   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:00.259683   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:00.760461   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:01.259941   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:01.760134   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:02.260147   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:02.759579   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:03.259541   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:03.760078   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:04.259495   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:04.760078   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:05.259822   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:05.760357   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:06.259703   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:06.760335   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:07.259502   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:07.759633   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:08.260454   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 19:24:08.399653   27348 kubeadm.go:1107] duration metric: took 11.783495552s to wait for elevateKubeSystemPrivileges
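	The long run of identical "kubectl get sa default" commands above is a readiness poll: the runner retries roughly every half second until the default service account exists, then records the total wait (11.78s here). A minimal Go sketch of that loop, using the command paths shown in the log (the helper itself is ours, not minikube's):

```go
// Poll "kubectl get sa default" until it succeeds or a deadline passes,
// mirroring the ~0.5s cadence visible in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.29.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists; the API is serving
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```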
	W0319 19:24:08.399689   27348 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 19:24:08.399698   27348 kubeadm.go:393] duration metric: took 24.313115746s to StartCluster
	I0319 19:24:08.399718   27348 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:08.399810   27348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:24:08.400404   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:08.400623   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 19:24:08.400636   27348 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 19:24:08.400691   27348 addons.go:69] Setting storage-provisioner=true in profile "ha-218762"
	I0319 19:24:08.400618   27348 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:24:08.400742   27348 start.go:240] waiting for startup goroutines ...
	I0319 19:24:08.400717   27348 addons.go:69] Setting default-storageclass=true in profile "ha-218762"
	I0319 19:24:08.400781   27348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-218762"
	I0319 19:24:08.400719   27348 addons.go:234] Setting addon storage-provisioner=true in "ha-218762"
	I0319 19:24:08.400895   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:24:08.400834   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:08.401197   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.401225   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.401277   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.401314   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.416354   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0319 19:24:08.416357   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0319 19:24:08.416840   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.416941   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.417371   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.417392   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.417461   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.417478   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.417697   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.417748   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.417935   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:08.418213   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.418242   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.420048   27348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:24:08.420312   27348 kapi.go:59] client config for ha-218762: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt", KeyFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key", CAFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0319 19:24:08.420727   27348 cert_rotation.go:137] Starting client certificate rotation controller
	I0319 19:24:08.420901   27348 addons.go:234] Setting addon default-storageclass=true in "ha-218762"
	I0319 19:24:08.420931   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:24:08.421207   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.421226   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.432480   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
	I0319 19:24:08.432904   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.433343   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.433361   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.433640   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.433846   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:08.435363   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:24:08.437784   27348 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 19:24:08.435580   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I0319 19:24:08.439290   27348 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:24:08.439307   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 19:24:08.439322   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:24:08.439682   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.440100   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.440119   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.440504   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.441057   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:08.441088   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:08.442176   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.442568   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:24:08.442590   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.442686   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:24:08.442838   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:24:08.442976   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:24:08.443089   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:24:08.455262   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0319 19:24:08.455551   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:08.455953   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:08.455975   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:08.456265   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:08.456441   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:08.457974   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:24:08.458173   27348 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 19:24:08.458185   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 19:24:08.458197   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:24:08.460497   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.460831   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:24:08.460847   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:08.461038   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:24:08.461202   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:24:08.461334   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:24:08.461485   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:24:08.496913   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0319 19:24:08.578292   27348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:24:08.602955   27348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 19:24:08.796234   27348 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
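	The sed pipeline a few lines above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1): it inserts a hosts{} stanza just before the "forward . /etc/resolv.conf" line and adds "log" after "errors". Below is an illustrative Go equivalent of that string transformation, operating on an in-memory Corefile; this is not minikube's implementation.

```go
// Insert a hosts{} block for host.minikube.internal ahead of the
// "forward . /etc/resolv.conf" line in a CoreDNS Corefile.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts) // add the hosts block just before forwarding
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```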
	I0319 19:24:09.093831   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.093861   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.093912   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.093922   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.094169   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.094172   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094192   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.094200   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094181   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094286   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094301   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.094309   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.094210   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.094334   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.094603   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094616   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094646   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.094676   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.094684   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.094785   27348 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0319 19:24:09.094794   27348 round_trippers.go:469] Request Headers:
	I0319 19:24:09.094804   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:24:09.094809   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:24:09.104238   27348 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0319 19:24:09.104753   27348 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0319 19:24:09.104767   27348 round_trippers.go:469] Request Headers:
	I0319 19:24:09.104777   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:24:09.104786   27348 round_trippers.go:473]     Content-Type: application/json
	I0319 19:24:09.104793   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:24:09.107595   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:24:09.107708   27348 main.go:141] libmachine: Making call to close driver server
	I0319 19:24:09.107718   27348 main.go:141] libmachine: (ha-218762) Calling .Close
	I0319 19:24:09.107926   27348 main.go:141] libmachine: Successfully made call to close driver server
	I0319 19:24:09.107942   27348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 19:24:09.107959   27348 main.go:141] libmachine: (ha-218762) DBG | Closing plugin on server side
	I0319 19:24:09.109736   27348 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0319 19:24:09.110904   27348 addons.go:505] duration metric: took 710.266074ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0319 19:24:09.110939   27348 start.go:245] waiting for cluster config update ...
	I0319 19:24:09.110955   27348 start.go:254] writing updated cluster config ...
	I0319 19:24:09.112685   27348 out.go:177] 
	I0319 19:24:09.114089   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:09.114173   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:24:09.115868   27348 out.go:177] * Starting "ha-218762-m02" control-plane node in "ha-218762" cluster
	I0319 19:24:09.116878   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:24:09.116898   27348 cache.go:56] Caching tarball of preloaded images
	I0319 19:24:09.116979   27348 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:24:09.116992   27348 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:24:09.117068   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:24:09.117228   27348 start.go:360] acquireMachinesLock for ha-218762-m02: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:24:09.117276   27348 start.go:364] duration metric: took 30.229µs to acquireMachinesLock for "ha-218762-m02"
	I0319 19:24:09.117302   27348 start.go:93] Provisioning new machine with config: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:24:09.117388   27348 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0319 19:24:09.118566   27348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 19:24:09.118642   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:09.118669   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:09.132381   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0319 19:24:09.132753   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:09.133161   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:09.133176   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:09.133472   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:09.133684   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:09.133841   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:09.133986   27348 start.go:159] libmachine.API.Create for "ha-218762" (driver="kvm2")
	I0319 19:24:09.134009   27348 client.go:168] LocalClient.Create starting
	I0319 19:24:09.134038   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:24:09.134071   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:24:09.134088   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:24:09.134151   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:24:09.134179   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:24:09.134197   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:24:09.134219   27348 main.go:141] libmachine: Running pre-create checks...
	I0319 19:24:09.134231   27348 main.go:141] libmachine: (ha-218762-m02) Calling .PreCreateCheck
	I0319 19:24:09.134383   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetConfigRaw
	I0319 19:24:09.134739   27348 main.go:141] libmachine: Creating machine...
	I0319 19:24:09.134752   27348 main.go:141] libmachine: (ha-218762-m02) Calling .Create
	I0319 19:24:09.134874   27348 main.go:141] libmachine: (ha-218762-m02) Creating KVM machine...
	I0319 19:24:09.136007   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found existing default KVM network
	I0319 19:24:09.136161   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found existing private KVM network mk-ha-218762
	I0319 19:24:09.136364   27348 main.go:141] libmachine: (ha-218762-m02) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02 ...
	I0319 19:24:09.136390   27348 main.go:141] libmachine: (ha-218762-m02) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:24:09.136439   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.136339   27723 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:24:09.136514   27348 main.go:141] libmachine: (ha-218762-m02) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:24:09.352009   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.351882   27723 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa...
	I0319 19:24:09.449610   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.449508   27723 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/ha-218762-m02.rawdisk...
	I0319 19:24:09.449645   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Writing magic tar header
	I0319 19:24:09.449659   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Writing SSH key tar header
	I0319 19:24:09.449670   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:09.449615   27723 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02 ...
	I0319 19:24:09.449727   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02
	I0319 19:24:09.449758   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:24:09.449782   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:24:09.449801   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02 (perms=drwx------)
	I0319 19:24:09.449816   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:24:09.449831   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:24:09.449850   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:24:09.449869   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:24:09.449883   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:24:09.449906   27348 main.go:141] libmachine: (ha-218762-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:24:09.449917   27348 main.go:141] libmachine: (ha-218762-m02) Creating domain...
	I0319 19:24:09.449931   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:24:09.449944   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:24:09.449972   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Checking permissions on dir: /home
	I0319 19:24:09.449996   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Skipping /home - not owner
	I0319 19:24:09.450793   27348 main.go:141] libmachine: (ha-218762-m02) define libvirt domain using xml: 
	I0319 19:24:09.450815   27348 main.go:141] libmachine: (ha-218762-m02) <domain type='kvm'>
	I0319 19:24:09.450825   27348 main.go:141] libmachine: (ha-218762-m02)   <name>ha-218762-m02</name>
	I0319 19:24:09.450834   27348 main.go:141] libmachine: (ha-218762-m02)   <memory unit='MiB'>2200</memory>
	I0319 19:24:09.450846   27348 main.go:141] libmachine: (ha-218762-m02)   <vcpu>2</vcpu>
	I0319 19:24:09.450854   27348 main.go:141] libmachine: (ha-218762-m02)   <features>
	I0319 19:24:09.450860   27348 main.go:141] libmachine: (ha-218762-m02)     <acpi/>
	I0319 19:24:09.450867   27348 main.go:141] libmachine: (ha-218762-m02)     <apic/>
	I0319 19:24:09.450873   27348 main.go:141] libmachine: (ha-218762-m02)     <pae/>
	I0319 19:24:09.450884   27348 main.go:141] libmachine: (ha-218762-m02)     
	I0319 19:24:09.450892   27348 main.go:141] libmachine: (ha-218762-m02)   </features>
	I0319 19:24:09.450897   27348 main.go:141] libmachine: (ha-218762-m02)   <cpu mode='host-passthrough'>
	I0319 19:24:09.450904   27348 main.go:141] libmachine: (ha-218762-m02)   
	I0319 19:24:09.450909   27348 main.go:141] libmachine: (ha-218762-m02)   </cpu>
	I0319 19:24:09.450919   27348 main.go:141] libmachine: (ha-218762-m02)   <os>
	I0319 19:24:09.450924   27348 main.go:141] libmachine: (ha-218762-m02)     <type>hvm</type>
	I0319 19:24:09.450930   27348 main.go:141] libmachine: (ha-218762-m02)     <boot dev='cdrom'/>
	I0319 19:24:09.450937   27348 main.go:141] libmachine: (ha-218762-m02)     <boot dev='hd'/>
	I0319 19:24:09.450944   27348 main.go:141] libmachine: (ha-218762-m02)     <bootmenu enable='no'/>
	I0319 19:24:09.450950   27348 main.go:141] libmachine: (ha-218762-m02)   </os>
	I0319 19:24:09.450956   27348 main.go:141] libmachine: (ha-218762-m02)   <devices>
	I0319 19:24:09.450963   27348 main.go:141] libmachine: (ha-218762-m02)     <disk type='file' device='cdrom'>
	I0319 19:24:09.450972   27348 main.go:141] libmachine: (ha-218762-m02)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/boot2docker.iso'/>
	I0319 19:24:09.450979   27348 main.go:141] libmachine: (ha-218762-m02)       <target dev='hdc' bus='scsi'/>
	I0319 19:24:09.450986   27348 main.go:141] libmachine: (ha-218762-m02)       <readonly/>
	I0319 19:24:09.450994   27348 main.go:141] libmachine: (ha-218762-m02)     </disk>
	I0319 19:24:09.450999   27348 main.go:141] libmachine: (ha-218762-m02)     <disk type='file' device='disk'>
	I0319 19:24:09.451008   27348 main.go:141] libmachine: (ha-218762-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:24:09.451016   27348 main.go:141] libmachine: (ha-218762-m02)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/ha-218762-m02.rawdisk'/>
	I0319 19:24:09.451024   27348 main.go:141] libmachine: (ha-218762-m02)       <target dev='hda' bus='virtio'/>
	I0319 19:24:09.451030   27348 main.go:141] libmachine: (ha-218762-m02)     </disk>
	I0319 19:24:09.451034   27348 main.go:141] libmachine: (ha-218762-m02)     <interface type='network'>
	I0319 19:24:09.451043   27348 main.go:141] libmachine: (ha-218762-m02)       <source network='mk-ha-218762'/>
	I0319 19:24:09.451047   27348 main.go:141] libmachine: (ha-218762-m02)       <model type='virtio'/>
	I0319 19:24:09.451054   27348 main.go:141] libmachine: (ha-218762-m02)     </interface>
	I0319 19:24:09.451059   27348 main.go:141] libmachine: (ha-218762-m02)     <interface type='network'>
	I0319 19:24:09.451066   27348 main.go:141] libmachine: (ha-218762-m02)       <source network='default'/>
	I0319 19:24:09.451073   27348 main.go:141] libmachine: (ha-218762-m02)       <model type='virtio'/>
	I0319 19:24:09.451078   27348 main.go:141] libmachine: (ha-218762-m02)     </interface>
	I0319 19:24:09.451085   27348 main.go:141] libmachine: (ha-218762-m02)     <serial type='pty'>
	I0319 19:24:09.451090   27348 main.go:141] libmachine: (ha-218762-m02)       <target port='0'/>
	I0319 19:24:09.451097   27348 main.go:141] libmachine: (ha-218762-m02)     </serial>
	I0319 19:24:09.451101   27348 main.go:141] libmachine: (ha-218762-m02)     <console type='pty'>
	I0319 19:24:09.451106   27348 main.go:141] libmachine: (ha-218762-m02)       <target type='serial' port='0'/>
	I0319 19:24:09.451113   27348 main.go:141] libmachine: (ha-218762-m02)     </console>
	I0319 19:24:09.451118   27348 main.go:141] libmachine: (ha-218762-m02)     <rng model='virtio'>
	I0319 19:24:09.451123   27348 main.go:141] libmachine: (ha-218762-m02)       <backend model='random'>/dev/random</backend>
	I0319 19:24:09.451127   27348 main.go:141] libmachine: (ha-218762-m02)     </rng>
	I0319 19:24:09.451134   27348 main.go:141] libmachine: (ha-218762-m02)     
	I0319 19:24:09.451138   27348 main.go:141] libmachine: (ha-218762-m02)     
	I0319 19:24:09.451143   27348 main.go:141] libmachine: (ha-218762-m02)   </devices>
	I0319 19:24:09.451149   27348 main.go:141] libmachine: (ha-218762-m02) </domain>
	I0319 19:24:09.451155   27348 main.go:141] libmachine: (ha-218762-m02) 
	I0319 19:24:09.457818   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:d5:02:d3 in network default
	I0319 19:24:09.458321   27348 main.go:141] libmachine: (ha-218762-m02) Ensuring networks are active...
	I0319 19:24:09.458344   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:09.459008   27348 main.go:141] libmachine: (ha-218762-m02) Ensuring network default is active
	I0319 19:24:09.459273   27348 main.go:141] libmachine: (ha-218762-m02) Ensuring network mk-ha-218762 is active
	I0319 19:24:09.459575   27348 main.go:141] libmachine: (ha-218762-m02) Getting domain xml...
	I0319 19:24:09.460239   27348 main.go:141] libmachine: (ha-218762-m02) Creating domain...
	I0319 19:24:10.688079   27348 main.go:141] libmachine: (ha-218762-m02) Waiting to get IP...
	I0319 19:24:10.688985   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:10.689439   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:10.689466   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:10.689400   27723 retry.go:31] will retry after 241.907067ms: waiting for machine to come up
	I0319 19:24:10.932878   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:10.933368   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:10.933399   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:10.933310   27723 retry.go:31] will retry after 360.492289ms: waiting for machine to come up
	I0319 19:24:11.295858   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:11.296334   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:11.296356   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:11.296298   27723 retry.go:31] will retry after 348.561104ms: waiting for machine to come up
	I0319 19:24:11.646768   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:11.647236   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:11.647260   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:11.647200   27723 retry.go:31] will retry after 572.33675ms: waiting for machine to come up
	I0319 19:24:12.220627   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:12.221063   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:12.221087   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:12.221023   27723 retry.go:31] will retry after 640.071922ms: waiting for machine to come up
	I0319 19:24:12.862498   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:12.862911   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:12.862943   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:12.862871   27723 retry.go:31] will retry after 937.280979ms: waiting for machine to come up
	I0319 19:24:13.801386   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:13.801793   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:13.801823   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:13.801740   27723 retry.go:31] will retry after 1.122005935s: waiting for machine to come up
	I0319 19:24:14.925675   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:14.926034   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:14.926054   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:14.925998   27723 retry.go:31] will retry after 1.034147281s: waiting for machine to come up
	I0319 19:24:15.962135   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:15.962501   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:15.962542   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:15.962483   27723 retry.go:31] will retry after 1.788451935s: waiting for machine to come up
	I0319 19:24:17.753255   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:17.753608   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:17.753631   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:17.753563   27723 retry.go:31] will retry after 1.438912642s: waiting for machine to come up
	I0319 19:24:19.193815   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:19.194226   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:19.194259   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:19.194168   27723 retry.go:31] will retry after 2.023000789s: waiting for machine to come up
	I0319 19:24:21.219365   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:21.219772   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:21.219794   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:21.219734   27723 retry.go:31] will retry after 2.388284325s: waiting for machine to come up
	I0319 19:24:23.611079   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:23.611472   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:23.611500   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:23.611426   27723 retry.go:31] will retry after 3.691797958s: waiting for machine to come up
	I0319 19:24:27.306012   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:27.306427   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find current IP address of domain ha-218762-m02 in network mk-ha-218762
	I0319 19:24:27.306468   27348 main.go:141] libmachine: (ha-218762-m02) DBG | I0319 19:24:27.306376   27723 retry.go:31] will retry after 4.354456279s: waiting for machine to come up
	I0319 19:24:31.663824   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:31.664180   27348 main.go:141] libmachine: (ha-218762-m02) Found IP for machine: 192.168.39.234
	I0319 19:24:31.664208   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has current primary IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
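A minimal, self-contained Go sketch of the wait-with-growing-backoff pattern visible in the "will retry after ..." lines above; waitForIP and its fake lookup are hypothetical stand-ins for illustration only and do not reproduce minikube's retry.go API:

// waitForIP polls lookup until it returns a non-empty IP or the timeout
// elapses, sleeping a little longer after each failed attempt -- the same
// shape as the "will retry after ..." lines in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the backoff a bit after each miss
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Hypothetical lookup; the real code queries the libvirt DHCP leases.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.234", nil
	}, time.Minute)
	fmt.Println(ip, err)
}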
	I0319 19:24:31.664219   27348 main.go:141] libmachine: (ha-218762-m02) Reserving static IP address...
	I0319 19:24:31.664515   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find host DHCP lease matching {name: "ha-218762-m02", mac: "52:54:00:ab:0e:bd", ip: "192.168.39.234"} in network mk-ha-218762
	I0319 19:24:31.735183   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Getting to WaitForSSH function...
	I0319 19:24:31.735226   27348 main.go:141] libmachine: (ha-218762-m02) Reserved static IP address: 192.168.39.234
	I0319 19:24:31.735239   27348 main.go:141] libmachine: (ha-218762-m02) Waiting for SSH to be available...
	I0319 19:24:31.737749   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:31.738159   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762
	I0319 19:24:31.738184   27348 main.go:141] libmachine: (ha-218762-m02) DBG | unable to find defined IP address of network mk-ha-218762 interface with MAC address 52:54:00:ab:0e:bd
	I0319 19:24:31.738313   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH client type: external
	I0319 19:24:31.738332   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa (-rw-------)
	I0319 19:24:31.738360   27348 main.go:141] libmachine: (ha-218762-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:24:31.738367   27348 main.go:141] libmachine: (ha-218762-m02) DBG | About to run SSH command:
	I0319 19:24:31.738383   27348 main.go:141] libmachine: (ha-218762-m02) DBG | exit 0
	I0319 19:24:31.742259   27348 main.go:141] libmachine: (ha-218762-m02) DBG | SSH cmd err, output: exit status 255: 
	I0319 19:24:31.742283   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0319 19:24:31.742292   27348 main.go:141] libmachine: (ha-218762-m02) DBG | command : exit 0
	I0319 19:24:31.742304   27348 main.go:141] libmachine: (ha-218762-m02) DBG | err     : exit status 255
	I0319 19:24:31.742320   27348 main.go:141] libmachine: (ha-218762-m02) DBG | output  : 
	I0319 19:24:34.743284   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Getting to WaitForSSH function...
	I0319 19:24:34.745760   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.746143   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:34.746173   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.746355   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH client type: external
	I0319 19:24:34.746378   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa (-rw-------)
	I0319 19:24:34.746410   27348 main.go:141] libmachine: (ha-218762-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:24:34.746431   27348 main.go:141] libmachine: (ha-218762-m02) DBG | About to run SSH command:
	I0319 19:24:34.746451   27348 main.go:141] libmachine: (ha-218762-m02) DBG | exit 0
	I0319 19:24:34.868360   27348 main.go:141] libmachine: (ha-218762-m02) DBG | SSH cmd err, output: <nil>: 
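The WaitForSSH lines above probe the guest by running "exit 0" through the system ssh binary with host-key checking disabled, retrying until the command exits cleanly. A minimal Go sketch of that probe follows; the key path, address and retry cadence are placeholders and this is not minikube's sshutil implementation:

// sshReady runs "exit 0" on the guest through the system ssh binary and
// reports whether the command exited cleanly -- the same probe shown in
// the WaitForSSH lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(keyPath, addr string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 10; i++ {
		if sshReady("/path/to/id_rsa", "192.168.39.234") { // placeholder key path
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // roughly the gap between probes in the log
	}
	fmt.Println("gave up waiting for SSH")
}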
	I0319 19:24:34.868642   27348 main.go:141] libmachine: (ha-218762-m02) KVM machine creation complete!
	I0319 19:24:34.868931   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetConfigRaw
	I0319 19:24:34.869449   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:34.869636   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:34.869818   27348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:24:34.869835   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:24:34.871080   27348 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:24:34.871093   27348 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:24:34.871098   27348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:24:34.871104   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:34.873305   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.873679   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:34.873708   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.873811   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:34.873984   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.874145   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.874303   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:34.874444   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:34.874628   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:34.874638   27348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:24:34.975552   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:24:34.975578   27348 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:24:34.975588   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:34.978298   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.978624   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:34.978649   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:34.978791   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:34.978979   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.979146   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:34.979280   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:34.979441   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:34.979593   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:34.979604   27348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:24:35.081422   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:24:35.081507   27348 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:24:35.081521   27348 main.go:141] libmachine: Provisioning with buildroot...
	I0319 19:24:35.081529   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:35.081784   27348 buildroot.go:166] provisioning hostname "ha-218762-m02"
	I0319 19:24:35.081805   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:35.082002   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.085929   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.086422   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.086493   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.086591   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.086804   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.087084   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.087286   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.087452   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:35.087605   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:35.087618   27348 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762-m02 && echo "ha-218762-m02" | sudo tee /etc/hostname
	I0319 19:24:35.204814   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762-m02
	
	I0319 19:24:35.204854   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.207405   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.207750   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.207778   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.207929   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.208117   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.208302   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.208466   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.208629   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:35.208784   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:35.208799   27348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:24:35.319044   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:24:35.319080   27348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:24:35.319100   27348 buildroot.go:174] setting up certificates
	I0319 19:24:35.319110   27348 provision.go:84] configureAuth start
	I0319 19:24:35.319123   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetMachineName
	I0319 19:24:35.319386   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:35.322159   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.322499   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.322528   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.322782   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.325377   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.325671   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.325697   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.325919   27348 provision.go:143] copyHostCerts
	I0319 19:24:35.325949   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:24:35.325978   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:24:35.325986   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:24:35.326048   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:24:35.326117   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:24:35.326133   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:24:35.326141   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:24:35.326162   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:24:35.326249   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:24:35.326270   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:24:35.326275   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:24:35.326309   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:24:35.326363   27348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762-m02 san=[127.0.0.1 192.168.39.234 ha-218762-m02 localhost minikube]
	I0319 19:24:35.537474   27348 provision.go:177] copyRemoteCerts
	I0319 19:24:35.537524   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:24:35.537546   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.540273   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.540583   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.540614   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.540773   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.540939   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.541086   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.541231   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:35.623138   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:24:35.623207   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0319 19:24:35.651170   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:24:35.651233   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 19:24:35.678948   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:24:35.679015   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:24:35.707837   27348 provision.go:87] duration metric: took 388.716414ms to configureAuth
	I0319 19:24:35.707870   27348 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:24:35.708019   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:35.708081   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.710589   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.710936   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.710974   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.711119   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.711290   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.711444   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.711609   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.711775   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:35.711935   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:35.711949   27348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:24:35.983586   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:24:35.983613   27348 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:24:35.983623   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetURL
	I0319 19:24:35.984709   27348 main.go:141] libmachine: (ha-218762-m02) DBG | Using libvirt version 6000000
	I0319 19:24:35.986477   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.986809   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.986830   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.987029   27348 main.go:141] libmachine: Docker is up and running!
	I0319 19:24:35.987045   27348 main.go:141] libmachine: Reticulating splines...
	I0319 19:24:35.987054   27348 client.go:171] duration metric: took 26.853037909s to LocalClient.Create
	I0319 19:24:35.987084   27348 start.go:167] duration metric: took 26.853098495s to libmachine.API.Create "ha-218762"
	I0319 19:24:35.987097   27348 start.go:293] postStartSetup for "ha-218762-m02" (driver="kvm2")
	I0319 19:24:35.987111   27348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:24:35.987128   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:35.987331   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:24:35.987353   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:35.989430   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.989742   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:35.989772   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:35.989894   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:35.990105   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:35.990262   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:35.990380   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:36.073266   27348 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:24:36.078444   27348 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:24:36.078470   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:24:36.078538   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:24:36.078623   27348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:24:36.078633   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:24:36.078704   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:24:36.089599   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:24:36.116537   27348 start.go:296] duration metric: took 129.427413ms for postStartSetup
	I0319 19:24:36.116576   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetConfigRaw
	I0319 19:24:36.117171   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:36.119370   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.119641   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.119660   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.119921   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:24:36.120134   27348 start.go:128] duration metric: took 27.002733312s to createHost
	I0319 19:24:36.120160   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:36.122149   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.122569   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.122595   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.122717   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:36.122848   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.122983   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.123089   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:36.123216   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:24:36.123372   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0319 19:24:36.123383   27348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:24:36.225736   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876276.199476383
	
	I0319 19:24:36.225762   27348 fix.go:216] guest clock: 1710876276.199476383
	I0319 19:24:36.225769   27348 fix.go:229] Guest: 2024-03-19 19:24:36.199476383 +0000 UTC Remote: 2024-03-19 19:24:36.120147227 +0000 UTC m=+82.586802676 (delta=79.329156ms)
	I0319 19:24:36.225782   27348 fix.go:200] guest clock delta is within tolerance: 79.329156ms
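The fix.go lines above read the guest clock with "date +%s.%N" and compare it against the host's wall clock to decide whether the skew is acceptable. A small Go sketch of that kind of comparison; the one-second tolerance is an assumed value for illustration, not the value minikube uses:

// clockDeltaOK parses the guest's "date +%s.%N" output, compares it with
// the given host time, and reports whether the absolute skew is within
// the tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDeltaOK(guestOutput string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log above; the delta comes out near 79ms.
	delta, ok := clockDeltaOK("1710876276.199476383", time.Unix(1710876276, 120147227), time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}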
	I0319 19:24:36.225787   27348 start.go:83] releasing machines lock for "ha-218762-m02", held for 27.10849928s
	I0319 19:24:36.225805   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.226085   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:36.228565   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.228943   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.228973   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.231315   27348 out.go:177] * Found network options:
	I0319 19:24:36.232788   27348 out.go:177]   - NO_PROXY=192.168.39.200
	W0319 19:24:36.234124   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:24:36.234156   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.234633   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.234824   27348 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:24:36.234905   27348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:24:36.234942   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	W0319 19:24:36.235027   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:24:36.235093   27348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:24:36.235114   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:24:36.237426   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.237744   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.237815   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.237849   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.237999   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:36.238060   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:36.238091   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:36.238193   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.238378   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:36.238387   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:24:36.238551   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:24:36.238547   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:36.238692   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:24:36.238816   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:24:36.472989   27348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:24:36.480468   27348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:24:36.480541   27348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:24:36.498734   27348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 19:24:36.498757   27348 start.go:494] detecting cgroup driver to use...
	I0319 19:24:36.498822   27348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:24:36.520118   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:24:36.538310   27348 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:24:36.538360   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:24:36.556254   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:24:36.573969   27348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:24:36.703237   27348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:24:36.840282   27348 docker.go:233] disabling docker service ...
	I0319 19:24:36.840349   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:24:36.857338   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:24:36.871851   27348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:24:37.007320   27348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:24:37.148570   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:24:37.174055   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:24:37.194852   27348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:24:37.194918   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.207083   27348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:24:37.207137   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.218504   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.229423   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.240393   27348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:24:37.252212   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.263942   27348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.283851   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:24:37.295634   27348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:24:37.305608   27348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:24:37.305660   27348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:24:37.319719   27348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:24:37.329851   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:24:37.463372   27348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:24:37.621609   27348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:24:37.621672   27348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:24:37.627757   27348 start.go:562] Will wait 60s for crictl version
	I0319 19:24:37.627813   27348 ssh_runner.go:195] Run: which crictl
	I0319 19:24:37.632007   27348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:24:37.670327   27348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:24:37.670388   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:24:37.704916   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:24:37.736656   27348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:24:37.738080   27348 out.go:177]   - env NO_PROXY=192.168.39.200
	I0319 19:24:37.739409   27348 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:24:37.742006   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:37.742358   27348 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:24:25 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:24:37.742384   27348 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:24:37.742616   27348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:24:37.747089   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:24:37.761455   27348 mustload.go:65] Loading cluster: ha-218762
	I0319 19:24:37.761674   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:24:37.761928   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:37.761952   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:37.776184   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0319 19:24:37.776575   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:37.777040   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:37.777065   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:37.777436   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:37.777649   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:24:37.779012   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:24:37.779275   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:24:37.779299   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:24:37.792981   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0319 19:24:37.793405   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:24:37.793840   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:24:37.793860   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:24:37.794135   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:24:37.794317   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:24:37.794474   27348 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.234
	I0319 19:24:37.794486   27348 certs.go:194] generating shared ca certs ...
	I0319 19:24:37.794504   27348 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:37.794633   27348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:24:37.794684   27348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:24:37.794698   27348 certs.go:256] generating profile certs ...
	I0319 19:24:37.794778   27348 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:24:37.794808   27348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190
	I0319 19:24:37.794829   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.254]
	I0319 19:24:38.041687   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190 ...
	I0319 19:24:38.041715   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190: {Name:mkdc5aa372770cfba177067290e99c812165411e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:38.041896   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190 ...
	I0319 19:24:38.041914   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190: {Name:mkb3673a763c650724d08648f73a648066a45f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:24:38.042006   27348 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.5c194190 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:24:38.042147   27348 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.5c194190 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
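The apiserver certificate generated above carries every IP and hostname the control plane must answer for as subject alternative names. A compact Go sketch of building such a certificate follows; it self-signs only to stay short, whereas minikube signs with the shared cluster CA key:

// Generates a server certificate whose SANs mirror the san=[...] list in
// the log line above, then prints it as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-218762-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: the service IP, loopback, node IPs and the virtual IP,
		// plus the hostnames, as listed in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.200"), net.ParseIP("192.168.39.234"), net.ParseIP("192.168.39.254"),
		},
		DNSNames: []string{"ha-218762-m02", "localhost", "minikube"},
	}
	// Self-signed for brevity: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}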
	I0319 19:24:38.042302   27348 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:24:38.042319   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:24:38.042336   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:24:38.042358   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:24:38.042378   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:24:38.042396   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:24:38.042411   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:24:38.042429   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:24:38.042447   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:24:38.042518   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:24:38.042557   27348 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:24:38.042571   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:24:38.042609   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:24:38.042646   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:24:38.042676   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:24:38.042727   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:24:38.042767   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.042793   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.042811   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.042849   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:24:38.045841   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:38.046258   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:24:38.046283   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:24:38.046435   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:24:38.046614   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:24:38.046756   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:24:38.046863   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:24:38.124603   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0319 19:24:38.129963   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0319 19:24:38.145656   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0319 19:24:38.151704   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0319 19:24:38.163237   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0319 19:24:38.168079   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0319 19:24:38.179223   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0319 19:24:38.183958   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0319 19:24:38.195603   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0319 19:24:38.200209   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0319 19:24:38.212130   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0319 19:24:38.216803   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0319 19:24:38.228914   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:24:38.257457   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:24:38.283417   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:24:38.308971   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:24:38.334720   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0319 19:24:38.360044   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 19:24:38.385284   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:24:38.410479   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:24:38.437042   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:24:38.463073   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:24:38.489289   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:24:38.514924   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0319 19:24:38.533209   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0319 19:24:38.551406   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0319 19:24:38.569691   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0319 19:24:38.588065   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0319 19:24:38.606205   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0319 19:24:38.625137   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
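
For an additional control-plane node, the cluster-wide key material has to match the primary, so the service-account key pair, the front-proxy CA and the etcd CA are first read off the existing control plane (the "scp ... --> memory" lines) and then written, together with the profile and CA certs, to /var/lib/minikube/certs and /usr/share/ca-certificates on the target node. A bare-bones sketch of that push using the system scp client; host, user and key path are placeholders, and minikube actually streams the files through its own SSH runner rather than shelling out:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Two of the source -> destination pairs from the log (list abbreviated).
	files := map[string]string{
		"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt":                           "/var/lib/minikube/certs/ca.crt",
		"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
	}
	// Host, user and key are placeholders here; minikube resolves them from
	// the machine driver (GetSSHHostname / GetSSHKeyPath / GetSSHUsername).
	host, user, key := "NODE_IP", "docker", "/path/to/machines/id_rsa"
	for src, dst := range files {
		cmd := exec.Command("scp", "-i", key, src, user+"@"+host+":"+dst)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "copy failed:", src, err)
		}
	}
}
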
	I0319 19:24:38.643070   27348 ssh_runner.go:195] Run: openssl version
	I0319 19:24:38.649680   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:24:38.661801   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.666659   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.666698   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:24:38.672672   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:24:38.684564   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:24:38.697224   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.702172   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.702217   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:24:38.709835   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:24:38.723062   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:24:38.735260   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.740020   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.740061   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:24:38.746062   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
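
The test/ln -fs and openssl x509 -hash -noout pairs above register each CA with the node's system trust store: the PEM is placed in /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A minimal local sketch of the same idea in Go; installCACert is a hypothetical helper, and minikube performs these steps remotely over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert copies a CA PEM into the shared cert directory and links it
// into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0), which
// mirrors the "openssl x509 -hash" + "ln -fs" steps in the log above.
func installCACert(pemPath, name string) error {
	dst := filepath.Join("/usr/share/ca-certificates", name)
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0644); err != nil {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(dst, link)
}

func main() {
	if err := installCACert("ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
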
	I0319 19:24:38.758243   27348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:24:38.762688   27348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:24:38.762732   27348 kubeadm.go:928] updating node {m02 192.168.39.234 8443 v1.29.3 crio true true} ...
	I0319 19:24:38.762800   27348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:24:38.762824   27348 kube-vip.go:111] generating kube-vip config ...
	I0319 19:24:38.762851   27348 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:24:38.781545   27348 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:24:38.781613   27348 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
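
This static pod is what keeps the control-plane VIP 192.168.39.254 reachable on port 8443: each control-plane node runs kube-vip, the instances compete for the plndr-cp-lock lease (vip_leaderelection), the current leader answers ARP for the address on eth0, and lb_enable load-balances API traffic across the members. The manifest is rendered from per-node parameters; the sketch below shows the general shape with Go's text/template, using a trimmed stand-in template rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that vary per cluster; the template below is a
// simplified illustration of a kube-vip static pod, not minikube's template.
type vipParams struct {
	VIP   string
	Port  string
	Image string
}

const vipManifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: "{{.VIP}}"}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipManifest))
	// Render to stdout; the real flow writes the result to
	// /etc/kubernetes/manifests/kube-vip.yaml on each control-plane node.
	_ = t.Execute(os.Stdout, vipParams{
		VIP:   "192.168.39.254",
		Port:  "8443",
		Image: "ghcr.io/kube-vip/kube-vip:v0.7.1",
	})
}
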
	I0319 19:24:38.781664   27348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:24:38.792816   27348 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0319 19:24:38.792873   27348 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0319 19:24:38.803939   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0319 19:24:38.803957   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:24:38.804021   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:24:38.804095   27348 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0319 19:24:38.804125   27348 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0319 19:24:38.809979   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0319 19:24:38.810004   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0319 19:24:40.428665   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:24:40.444708   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:24:40.444821   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:24:40.449895   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0319 19:24:40.449924   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0319 19:25:09.544171   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:25:09.544299   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:25:09.550148   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0319 19:25:09.550189   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
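
Since /var/lib/minikube/binaries/v1.29.3 is empty on the new node, kubectl, kubelet and kubeadm are downloaded from dl.k8s.io (each release also publishes a matching .sha256 file), cached locally and copied over SSH. A small sketch of the download-and-verify half of that flow, using the same URL as in the log and with error handling reduced to panics:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	// The published .sha256 file contains the hex digest of the binary.
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		panic("checksum mismatch for kubectl v1.29.3")
	}
	// The verified binary would then be cached and scp'd to
	// /var/lib/minikube/binaries/v1.29.3/kubectl on the node.
	if err := os.WriteFile("kubectl", bin, 0755); err != nil {
		panic(err)
	}
}
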
	I0319 19:25:09.800736   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0319 19:25:09.811056   27348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0319 19:25:09.829439   27348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:25:09.847650   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:25:09.866626   27348 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:25:09.871327   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:25:09.885218   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:10.015756   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
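
Before kubelet starts, the drop-in unit and kube-vip manifest are written and control-plane.minikube.internal is pinned to the VIP in /etc/hosts by filtering out any stale mapping and appending the new one. A Go sketch of that /etc/hosts rewrite; pinHost is a hypothetical helper mirroring the grep -v / echo / cp one-liner above:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// hostname to ip, dropping any previous "<ip>\t<hostname>" entry first.
func pinHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale mapping for the name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
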
	I0319 19:25:10.037115   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:25:10.037448   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:25:10.037480   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:25:10.051743   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0319 19:25:10.052142   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:25:10.052642   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:25:10.052666   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:25:10.052995   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:25:10.053205   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:25:10.053369   27348 start.go:316] joinCluster: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:25:10.053452   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0319 19:25:10.053471   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:25:10.056164   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:25:10.056571   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:25:10.056596   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:25:10.056765   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:25:10.056932   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:25:10.057104   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:25:10.057254   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:25:10.233803   27348 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:10.233856   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka8sqy.qmlnlfdjfipv0qxg --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m02 --control-plane --apiserver-advertise-address=192.168.39.234 --apiserver-bind-port=8443"
	I0319 19:25:33.787794   27348 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ka8sqy.qmlnlfdjfipv0qxg --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m02 --control-plane --apiserver-advertise-address=192.168.39.234 --apiserver-bind-port=8443": (23.553916446s)
	I0319 19:25:33.787824   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0319 19:25:34.498082   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-218762-m02 minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=ha-218762 minikube.k8s.io/primary=false
	I0319 19:25:34.657656   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-218762-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0319 19:25:34.797010   27348 start.go:318] duration metric: took 24.743639757s to joinCluster
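
The join itself is a plain kubeadm join against the VIP endpoint: the primary mints a bootstrap token with kubeadm token create --print-join-command --ttl=0, and the new node joins with --control-plane plus its own advertise address and bind port, after which it is labeled and the control-plane NoSchedule taint is removed. A sketch of issuing the same join from Go; the token and CA hash below are placeholders, not the values from this run:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Placeholder credentials: in the run above these come from
	// "kubeadm token create --print-join-command --ttl=0" on the primary.
	token := "abcdef.0123456789abcdef"
	caHash := "sha256:<discovery-token-ca-cert-hash>"

	// Mirrors the join command from the log: join via the control-plane VIP,
	// advertise this node's own IP, and request a control-plane role.
	cmd := exec.Command("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-218762-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.234",
		"--apiserver-bind-port=8443")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
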
	I0319 19:25:34.797098   27348 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:34.798969   27348 out.go:177] * Verifying Kubernetes components...
	I0319 19:25:34.797418   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:25:34.800398   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:35.092809   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:25:35.161141   27348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:25:35.161357   27348 kapi.go:59] client config for ha-218762: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt", KeyFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key", CAFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0319 19:25:35.161410   27348 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.200:8443
	I0319 19:25:35.161625   27348 node_ready.go:35] waiting up to 6m0s for node "ha-218762-m02" to be "Ready" ...
	I0319 19:25:35.161695   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:35.161702   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:35.161709   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:35.161712   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:35.182893   27348 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0319 19:25:35.662804   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:35.662830   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:35.662842   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:35.662847   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:35.667671   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:36.162832   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:36.162852   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:36.162861   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:36.162866   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:36.167304   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:36.662466   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:36.662489   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:36.662496   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:36.662500   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:36.671800   27348 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0319 19:25:37.161835   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:37.161865   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:37.161876   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:37.161882   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:37.165542   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:37.166376   27348 node_ready.go:53] node "ha-218762-m02" has status "Ready":"False"
	I0319 19:25:37.662711   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:37.662731   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:37.662739   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:37.662743   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:37.669135   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:25:38.162109   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:38.162128   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:38.162135   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:38.162140   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:38.165412   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:38.662046   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:38.662070   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:38.662081   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:38.662088   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:38.666467   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:39.162583   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:39.162600   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:39.162608   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:39.162613   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:39.166383   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:39.167473   27348 node_ready.go:53] node "ha-218762-m02" has status "Ready":"False"
	I0319 19:25:39.661854   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:39.661879   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:39.661889   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:39.661894   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:39.665709   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:40.162423   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:40.162447   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:40.162457   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:40.162463   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:40.166419   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:40.662769   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:40.662795   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:40.662806   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:40.662810   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:40.667756   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:41.161947   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:41.161971   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:41.162001   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:41.162006   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:41.165402   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:41.662723   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:41.662747   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:41.662760   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:41.662766   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:41.665848   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:41.666654   27348 node_ready.go:53] node "ha-218762-m02" has status "Ready":"False"
	I0319 19:25:42.161999   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:42.162017   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:42.162025   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:42.162029   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:42.165927   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:42.662122   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:42.662141   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:42.662152   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:42.662157   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:42.669266   27348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0319 19:25:43.162279   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:43.162298   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.162306   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.162310   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.166625   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:43.167248   27348 node_ready.go:49] node "ha-218762-m02" has status "Ready":"True"
	I0319 19:25:43.167267   27348 node_ready.go:38] duration metric: took 8.005626429s for node "ha-218762-m02" to be "Ready" ...
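
The node_ready loop above is a series of GETs against /api/v1/nodes/ha-218762-m02 every half second until the node's Ready condition reports True, which took about 8 seconds here. Roughly the same check written directly against client-go, with the kubeconfig path from this run and the node name hard-coded for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18453-10028/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, the same cadence as the round_trippers calls above.
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-218762-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
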
	I0319 19:25:43.167276   27348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:25:43.167336   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:43.167347   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.167354   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.167358   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.171740   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:43.178089   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.178162   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-6f64w
	I0319 19:25:43.178177   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.178188   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.178194   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.181350   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.182042   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:43.182061   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.182070   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.182075   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.189035   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:25:43.189521   27348 pod_ready.go:92] pod "coredns-76f75df574-6f64w" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:43.189536   27348 pod_ready.go:81] duration metric: took 11.427462ms for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.189545   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.189588   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zlz9l
	I0319 19:25:43.189598   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.189604   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.189609   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.192695   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.194082   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:43.194095   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.194102   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.194105   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.196683   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.197232   27348 pod_ready.go:92] pod "coredns-76f75df574-zlz9l" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:43.197245   27348 pod_ready.go:81] duration metric: took 7.694962ms for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.197256   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.197308   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762
	I0319 19:25:43.197319   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.197328   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.197337   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.200293   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.201025   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:43.201041   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.201049   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.201055   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.203397   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.203942   27348 pod_ready.go:92] pod "etcd-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:43.203958   27348 pod_ready.go:81] duration metric: took 6.695486ms for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.203969   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:43.204057   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:43.204071   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.204080   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.204085   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.207034   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:43.207553   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:43.207567   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.207574   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.207577   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.210905   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.704715   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:43.704734   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.704741   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.704745   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.707950   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:43.708724   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:43.708742   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:43.708749   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:43.708753   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:43.711524   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:44.204920   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:44.204940   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.204948   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.204952   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.208483   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:44.209381   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:44.209397   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.209408   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.209415   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.212193   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:44.704194   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:44.704221   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.704236   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.704243   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.708644   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:44.709827   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:44.709846   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:44.709856   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:44.709862   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:44.715011   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:25:45.204432   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:45.204454   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.204462   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.204467   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.208183   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:45.209578   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:45.209597   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.209608   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.209612   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.212840   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:45.213404   27348 pod_ready.go:102] pod "etcd-ha-218762-m02" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:45.704850   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:45.704877   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.704886   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.704893   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.708354   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:45.709104   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:45.709117   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:45.709124   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:45.709130   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:45.711803   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.204411   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:46.204432   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.204440   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.204444   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.207897   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:46.208700   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:46.208723   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.208733   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.208738   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.211254   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.704303   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:25:46.704327   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.704336   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.704340   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.707594   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:46.708528   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:46.708544   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.708551   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.708555   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.711277   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.711792   27348 pod_ready.go:92] pod "etcd-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:46.711812   27348 pod_ready.go:81] duration metric: took 3.507834904s for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.711830   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.711885   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762
	I0319 19:25:46.711896   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.711905   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.711913   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.714566   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.715234   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:46.715247   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.715254   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.715257   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.717661   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.718248   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:46.718262   27348 pod_ready.go:81] duration metric: took 6.423275ms for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.718270   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.718309   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m02
	I0319 19:25:46.718318   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.718324   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.718328   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.721146   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:46.763034   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:46.763059   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.763066   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.763070   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.766397   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:46.767053   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:46.767070   27348 pod_ready.go:81] duration metric: took 48.795064ms for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.767080   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:46.962439   27348 request.go:629] Waited for 195.287831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:25:46.962500   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:25:46.962507   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:46.962519   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:46.962528   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:46.966054   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.163299   27348 request.go:629] Waited for 196.368877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:47.163347   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:47.163353   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.163360   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.163371   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.166980   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.167805   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:47.167822   27348 pod_ready.go:81] duration metric: took 400.736228ms for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
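
The "Waited for ... due to client-side throttling" lines come from client-go's client-side rate limiter: with QPS and Burst left at zero in the rest.Config shown earlier, the client falls back to its defaults (roughly 5 requests per second with a burst of 10), so the paired pod and node GETs queue briefly. Raising the limits on the rest.Config removes the wait; the values below are illustrative only:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18453-10028/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst at 0 the limiter uses its defaults and logs
	// "Waited for ... due to client-side throttling" under request bursts.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
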
	I0319 19:25:47.167832   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.363074   27348 request.go:629] Waited for 195.190772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:25:47.363123   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:25:47.363127   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.363135   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.363139   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.367498   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:25:47.562667   27348 request.go:629] Waited for 194.34216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.562723   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.562730   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.562745   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.562757   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.565715   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:47.566492   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:47.566512   27348 pod_ready.go:81] duration metric: took 398.672611ms for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.566525   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.762691   27348 request.go:629] Waited for 196.105188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:25:47.762740   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:25:47.762745   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.762752   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.762756   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.766268   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.963224   27348 request.go:629] Waited for 195.69257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.963292   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:47.963298   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:47.963305   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:47.963309   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:47.966811   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:47.967555   27348 pod_ready.go:92] pod "kube-proxy-9q4nx" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:47.967572   27348 pod_ready.go:81] duration metric: took 401.040932ms for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:47.967580   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.162725   27348 request.go:629] Waited for 195.082384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:25:48.162808   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:25:48.162819   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.162831   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.162843   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.166291   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:48.362494   27348 request.go:629] Waited for 195.31511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.362542   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.362547   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.362554   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.362559   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.365792   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:48.366668   27348 pod_ready.go:92] pod "kube-proxy-qd8kk" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:48.366687   27348 pod_ready.go:81] duration metric: took 399.101448ms for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.366696   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.562806   27348 request.go:629] Waited for 196.046058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:25:48.562891   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:25:48.562899   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.562911   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.562918   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.568798   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:25:48.763166   27348 request.go:629] Waited for 193.493235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.763221   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:25:48.763226   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.763233   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.763237   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.767000   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:48.768213   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:48.768228   27348 pod_ready.go:81] duration metric: took 401.526784ms for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.768243   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:48.962362   27348 request.go:629] Waited for 194.037483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:25:48.962435   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:25:48.962442   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:48.962459   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:48.962466   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:48.966428   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.162504   27348 request.go:629] Waited for 195.350806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:49.162548   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:25:49.162553   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.162560   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.162580   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.166179   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.166738   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:25:49.166754   27348 pod_ready.go:81] duration metric: took 398.50231ms for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:25:49.166767   27348 pod_ready.go:38] duration metric: took 5.999479071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:25:49.166790   27348 api_server.go:52] waiting for apiserver process to appear ...
	I0319 19:25:49.166842   27348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:25:49.185557   27348 api_server.go:72] duration metric: took 14.388418616s to wait for apiserver process to appear ...
	I0319 19:25:49.185578   27348 api_server.go:88] waiting for apiserver healthz status ...
	I0319 19:25:49.185592   27348 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0319 19:25:49.189987   27348 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0319 19:25:49.190050   27348 round_trippers.go:463] GET https://192.168.39.200:8443/version
	I0319 19:25:49.190061   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.190072   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.190085   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.192774   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:25:49.193056   27348 api_server.go:141] control plane version: v1.29.3
	I0319 19:25:49.193080   27348 api_server.go:131] duration metric: took 7.495817ms to wait for apiserver health ...
	I0319 19:25:49.193088   27348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 19:25:49.362388   27348 request.go:629] Waited for 169.242058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.362478   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.362489   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.362499   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.362509   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.368327   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:25:49.373508   27348 system_pods.go:59] 17 kube-system pods found
	I0319 19:25:49.373533   27348 system_pods.go:61] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:25:49.373538   27348 system_pods.go:61] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:25:49.373541   27348 system_pods.go:61] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:25:49.373546   27348 system_pods.go:61] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:25:49.373549   27348 system_pods.go:61] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:25:49.373552   27348 system_pods.go:61] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:25:49.373555   27348 system_pods.go:61] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:25:49.373559   27348 system_pods.go:61] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:25:49.373562   27348 system_pods.go:61] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:25:49.373565   27348 system_pods.go:61] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:25:49.373568   27348 system_pods.go:61] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:25:49.373570   27348 system_pods.go:61] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:25:49.373573   27348 system_pods.go:61] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:25:49.373579   27348 system_pods.go:61] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:25:49.373582   27348 system_pods.go:61] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:25:49.373584   27348 system_pods.go:61] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:25:49.373587   27348 system_pods.go:61] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:25:49.373592   27348 system_pods.go:74] duration metric: took 180.499021ms to wait for pod list to return data ...
	I0319 19:25:49.373601   27348 default_sa.go:34] waiting for default service account to be created ...
	I0319 19:25:49.563023   27348 request.go:629] Waited for 189.357435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:25:49.563077   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:25:49.563082   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.563090   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.563095   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.566918   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.567127   27348 default_sa.go:45] found service account: "default"
	I0319 19:25:49.567143   27348 default_sa.go:55] duration metric: took 193.536936ms for default service account to be created ...
	I0319 19:25:49.567151   27348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 19:25:49.762527   27348 request.go:629] Waited for 195.314439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.762604   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:25:49.762612   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.762621   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.762629   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.769072   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:25:49.775148   27348 system_pods.go:86] 17 kube-system pods found
	I0319 19:25:49.775172   27348 system_pods.go:89] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:25:49.775178   27348 system_pods.go:89] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:25:49.775181   27348 system_pods.go:89] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:25:49.775186   27348 system_pods.go:89] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:25:49.775189   27348 system_pods.go:89] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:25:49.775193   27348 system_pods.go:89] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:25:49.775196   27348 system_pods.go:89] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:25:49.775202   27348 system_pods.go:89] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:25:49.775208   27348 system_pods.go:89] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:25:49.775214   27348 system_pods.go:89] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:25:49.775223   27348 system_pods.go:89] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:25:49.775234   27348 system_pods.go:89] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:25:49.775249   27348 system_pods.go:89] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:25:49.775255   27348 system_pods.go:89] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:25:49.775259   27348 system_pods.go:89] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:25:49.775266   27348 system_pods.go:89] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:25:49.775272   27348 system_pods.go:89] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:25:49.775282   27348 system_pods.go:126] duration metric: took 208.126231ms to wait for k8s-apps to be running ...
	I0319 19:25:49.775290   27348 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 19:25:49.775338   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:25:49.793710   27348 system_svc.go:56] duration metric: took 18.411138ms WaitForService to wait for kubelet
	I0319 19:25:49.793746   27348 kubeadm.go:576] duration metric: took 14.996608122s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:25:49.793771   27348 node_conditions.go:102] verifying NodePressure condition ...
	I0319 19:25:49.963171   27348 request.go:629] Waited for 169.333432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0319 19:25:49.963267   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0319 19:25:49.963280   27348 round_trippers.go:469] Request Headers:
	I0319 19:25:49.963291   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:25:49.963294   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:25:49.967076   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:25:49.967857   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:25:49.967876   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:25:49.967886   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:25:49.967890   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:25:49.967895   27348 node_conditions.go:105] duration metric: took 174.117975ms to run NodePressure ...
	I0319 19:25:49.967905   27348 start.go:240] waiting for startup goroutines ...
	I0319 19:25:49.967926   27348 start.go:254] writing updated cluster config ...
	I0319 19:25:49.970258   27348 out.go:177] 
	I0319 19:25:49.972154   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:25:49.972283   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:25:49.973965   27348 out.go:177] * Starting "ha-218762-m03" control-plane node in "ha-218762" cluster
	I0319 19:25:49.975069   27348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:25:49.975087   27348 cache.go:56] Caching tarball of preloaded images
	I0319 19:25:49.975176   27348 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:25:49.975188   27348 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:25:49.975280   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:25:49.975459   27348 start.go:360] acquireMachinesLock for ha-218762-m03: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:25:49.975507   27348 start.go:364] duration metric: took 25.079µs to acquireMachinesLock for "ha-218762-m03"
	I0319 19:25:49.975530   27348 start.go:93] Provisioning new machine with config: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:49.975628   27348 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0319 19:25:49.977206   27348 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 19:25:49.977288   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:25:49.977325   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:25:49.991624   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39853
	I0319 19:25:49.992012   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:25:49.992436   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:25:49.992454   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:25:49.992764   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:25:49.992974   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:25:49.993124   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:25:49.993270   27348 start.go:159] libmachine.API.Create for "ha-218762" (driver="kvm2")
	I0319 19:25:49.993292   27348 client.go:168] LocalClient.Create starting
	I0319 19:25:49.993317   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 19:25:49.993344   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:25:49.993357   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:25:49.993409   27348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 19:25:49.993428   27348 main.go:141] libmachine: Decoding PEM data...
	I0319 19:25:49.993441   27348 main.go:141] libmachine: Parsing certificate...
	I0319 19:25:49.993459   27348 main.go:141] libmachine: Running pre-create checks...
	I0319 19:25:49.993466   27348 main.go:141] libmachine: (ha-218762-m03) Calling .PreCreateCheck
	I0319 19:25:49.993637   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetConfigRaw
	I0319 19:25:49.994007   27348 main.go:141] libmachine: Creating machine...
	I0319 19:25:49.994020   27348 main.go:141] libmachine: (ha-218762-m03) Calling .Create
	I0319 19:25:49.994160   27348 main.go:141] libmachine: (ha-218762-m03) Creating KVM machine...
	I0319 19:25:49.995282   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found existing default KVM network
	I0319 19:25:49.995401   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found existing private KVM network mk-ha-218762
	I0319 19:25:49.995556   27348 main.go:141] libmachine: (ha-218762-m03) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03 ...
	I0319 19:25:49.995582   27348 main.go:141] libmachine: (ha-218762-m03) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:25:49.995625   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:49.995537   28122 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:25:49.995726   27348 main.go:141] libmachine: (ha-218762-m03) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 19:25:50.216991   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:50.216859   28122 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa...
	I0319 19:25:50.331847   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:50.331748   28122 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/ha-218762-m03.rawdisk...
	I0319 19:25:50.331870   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Writing magic tar header
	I0319 19:25:50.331887   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Writing SSH key tar header
	I0319 19:25:50.331963   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:50.331903   28122 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03 ...
	I0319 19:25:50.332068   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03 (perms=drwx------)
	I0319 19:25:50.332081   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03
	I0319 19:25:50.332088   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 19:25:50.332099   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 19:25:50.332105   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:25:50.332112   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 19:25:50.332127   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 19:25:50.332142   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 19:25:50.332162   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 19:25:50.332174   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 19:25:50.332180   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home/jenkins
	I0319 19:25:50.332188   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Checking permissions on dir: /home
	I0319 19:25:50.332193   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Skipping /home - not owner
	I0319 19:25:50.332202   27348 main.go:141] libmachine: (ha-218762-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 19:25:50.332216   27348 main.go:141] libmachine: (ha-218762-m03) Creating domain...
	I0319 19:25:50.333206   27348 main.go:141] libmachine: (ha-218762-m03) define libvirt domain using xml: 
	I0319 19:25:50.333233   27348 main.go:141] libmachine: (ha-218762-m03) <domain type='kvm'>
	I0319 19:25:50.333244   27348 main.go:141] libmachine: (ha-218762-m03)   <name>ha-218762-m03</name>
	I0319 19:25:50.333260   27348 main.go:141] libmachine: (ha-218762-m03)   <memory unit='MiB'>2200</memory>
	I0319 19:25:50.333268   27348 main.go:141] libmachine: (ha-218762-m03)   <vcpu>2</vcpu>
	I0319 19:25:50.333276   27348 main.go:141] libmachine: (ha-218762-m03)   <features>
	I0319 19:25:50.333284   27348 main.go:141] libmachine: (ha-218762-m03)     <acpi/>
	I0319 19:25:50.333294   27348 main.go:141] libmachine: (ha-218762-m03)     <apic/>
	I0319 19:25:50.333314   27348 main.go:141] libmachine: (ha-218762-m03)     <pae/>
	I0319 19:25:50.333326   27348 main.go:141] libmachine: (ha-218762-m03)     
	I0319 19:25:50.333333   27348 main.go:141] libmachine: (ha-218762-m03)   </features>
	I0319 19:25:50.333341   27348 main.go:141] libmachine: (ha-218762-m03)   <cpu mode='host-passthrough'>
	I0319 19:25:50.333346   27348 main.go:141] libmachine: (ha-218762-m03)   
	I0319 19:25:50.333353   27348 main.go:141] libmachine: (ha-218762-m03)   </cpu>
	I0319 19:25:50.333359   27348 main.go:141] libmachine: (ha-218762-m03)   <os>
	I0319 19:25:50.333363   27348 main.go:141] libmachine: (ha-218762-m03)     <type>hvm</type>
	I0319 19:25:50.333371   27348 main.go:141] libmachine: (ha-218762-m03)     <boot dev='cdrom'/>
	I0319 19:25:50.333376   27348 main.go:141] libmachine: (ha-218762-m03)     <boot dev='hd'/>
	I0319 19:25:50.333382   27348 main.go:141] libmachine: (ha-218762-m03)     <bootmenu enable='no'/>
	I0319 19:25:50.333394   27348 main.go:141] libmachine: (ha-218762-m03)   </os>
	I0319 19:25:50.333401   27348 main.go:141] libmachine: (ha-218762-m03)   <devices>
	I0319 19:25:50.333412   27348 main.go:141] libmachine: (ha-218762-m03)     <disk type='file' device='cdrom'>
	I0319 19:25:50.333423   27348 main.go:141] libmachine: (ha-218762-m03)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/boot2docker.iso'/>
	I0319 19:25:50.333431   27348 main.go:141] libmachine: (ha-218762-m03)       <target dev='hdc' bus='scsi'/>
	I0319 19:25:50.333436   27348 main.go:141] libmachine: (ha-218762-m03)       <readonly/>
	I0319 19:25:50.333443   27348 main.go:141] libmachine: (ha-218762-m03)     </disk>
	I0319 19:25:50.333449   27348 main.go:141] libmachine: (ha-218762-m03)     <disk type='file' device='disk'>
	I0319 19:25:50.333458   27348 main.go:141] libmachine: (ha-218762-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 19:25:50.333471   27348 main.go:141] libmachine: (ha-218762-m03)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/ha-218762-m03.rawdisk'/>
	I0319 19:25:50.333480   27348 main.go:141] libmachine: (ha-218762-m03)       <target dev='hda' bus='virtio'/>
	I0319 19:25:50.333488   27348 main.go:141] libmachine: (ha-218762-m03)     </disk>
	I0319 19:25:50.333493   27348 main.go:141] libmachine: (ha-218762-m03)     <interface type='network'>
	I0319 19:25:50.333501   27348 main.go:141] libmachine: (ha-218762-m03)       <source network='mk-ha-218762'/>
	I0319 19:25:50.333508   27348 main.go:141] libmachine: (ha-218762-m03)       <model type='virtio'/>
	I0319 19:25:50.333514   27348 main.go:141] libmachine: (ha-218762-m03)     </interface>
	I0319 19:25:50.333521   27348 main.go:141] libmachine: (ha-218762-m03)     <interface type='network'>
	I0319 19:25:50.333527   27348 main.go:141] libmachine: (ha-218762-m03)       <source network='default'/>
	I0319 19:25:50.333534   27348 main.go:141] libmachine: (ha-218762-m03)       <model type='virtio'/>
	I0319 19:25:50.333539   27348 main.go:141] libmachine: (ha-218762-m03)     </interface>
	I0319 19:25:50.333544   27348 main.go:141] libmachine: (ha-218762-m03)     <serial type='pty'>
	I0319 19:25:50.333556   27348 main.go:141] libmachine: (ha-218762-m03)       <target port='0'/>
	I0319 19:25:50.333569   27348 main.go:141] libmachine: (ha-218762-m03)     </serial>
	I0319 19:25:50.333587   27348 main.go:141] libmachine: (ha-218762-m03)     <console type='pty'>
	I0319 19:25:50.333605   27348 main.go:141] libmachine: (ha-218762-m03)       <target type='serial' port='0'/>
	I0319 19:25:50.333616   27348 main.go:141] libmachine: (ha-218762-m03)     </console>
	I0319 19:25:50.333626   27348 main.go:141] libmachine: (ha-218762-m03)     <rng model='virtio'>
	I0319 19:25:50.333637   27348 main.go:141] libmachine: (ha-218762-m03)       <backend model='random'>/dev/random</backend>
	I0319 19:25:50.333649   27348 main.go:141] libmachine: (ha-218762-m03)     </rng>
	I0319 19:25:50.333661   27348 main.go:141] libmachine: (ha-218762-m03)     
	I0319 19:25:50.333670   27348 main.go:141] libmachine: (ha-218762-m03)     
	I0319 19:25:50.333681   27348 main.go:141] libmachine: (ha-218762-m03)   </devices>
	I0319 19:25:50.333691   27348 main.go:141] libmachine: (ha-218762-m03) </domain>
	I0319 19:25:50.333703   27348 main.go:141] libmachine: (ha-218762-m03) 
	I0319 19:25:50.340864   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:b1:6c:94 in network default
	I0319 19:25:50.341477   27348 main.go:141] libmachine: (ha-218762-m03) Ensuring networks are active...
	I0319 19:25:50.341503   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:50.342231   27348 main.go:141] libmachine: (ha-218762-m03) Ensuring network default is active
	I0319 19:25:50.342629   27348 main.go:141] libmachine: (ha-218762-m03) Ensuring network mk-ha-218762 is active
	I0319 19:25:50.343095   27348 main.go:141] libmachine: (ha-218762-m03) Getting domain xml...
	I0319 19:25:50.343830   27348 main.go:141] libmachine: (ha-218762-m03) Creating domain...
	I0319 19:25:51.553932   27348 main.go:141] libmachine: (ha-218762-m03) Waiting to get IP...
	I0319 19:25:51.554758   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:51.555276   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:51.555306   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:51.555246   28122 retry.go:31] will retry after 284.654431ms: waiting for machine to come up
	I0319 19:25:51.841781   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:51.842213   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:51.842243   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:51.842162   28122 retry.go:31] will retry after 359.163065ms: waiting for machine to come up
	I0319 19:25:52.202706   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:52.203142   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:52.203171   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:52.203106   28122 retry.go:31] will retry after 305.30754ms: waiting for machine to come up
	I0319 19:25:52.510504   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:52.511008   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:52.511046   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:52.510981   28122 retry.go:31] will retry after 389.598505ms: waiting for machine to come up
	I0319 19:25:52.902345   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:52.902769   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:52.902792   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:52.902733   28122 retry.go:31] will retry after 706.518988ms: waiting for machine to come up
	I0319 19:25:53.610433   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:53.610863   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:53.610898   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:53.610818   28122 retry.go:31] will retry after 837.390706ms: waiting for machine to come up
	I0319 19:25:54.449569   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:54.449995   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:54.450022   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:54.449947   28122 retry.go:31] will retry after 1.115275188s: waiting for machine to come up
	I0319 19:25:55.567420   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:55.567784   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:55.567803   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:55.567738   28122 retry.go:31] will retry after 1.214137992s: waiting for machine to come up
	I0319 19:25:56.782933   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:56.783274   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:56.783322   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:56.783235   28122 retry.go:31] will retry after 1.594483272s: waiting for machine to come up
	I0319 19:25:58.378826   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:25:58.379221   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:25:58.379241   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:25:58.379186   28122 retry.go:31] will retry after 2.286199759s: waiting for machine to come up
	I0319 19:26:00.667332   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:00.667750   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:00.667816   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:00.667737   28122 retry.go:31] will retry after 1.954108791s: waiting for machine to come up
	I0319 19:26:02.622969   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:02.623461   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:02.623493   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:02.623410   28122 retry.go:31] will retry after 3.05464745s: waiting for machine to come up
	I0319 19:26:05.679695   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:05.680198   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:05.680214   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:05.680169   28122 retry.go:31] will retry after 2.868429032s: waiting for machine to come up
	I0319 19:26:08.550173   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:08.550630   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find current IP address of domain ha-218762-m03 in network mk-ha-218762
	I0319 19:26:08.550651   27348 main.go:141] libmachine: (ha-218762-m03) DBG | I0319 19:26:08.550599   28122 retry.go:31] will retry after 3.589077433s: waiting for machine to come up
	I0319 19:26:12.141536   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.142056   27348 main.go:141] libmachine: (ha-218762-m03) Found IP for machine: 192.168.39.15
	I0319 19:26:12.142075   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has current primary IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.142081   27348 main.go:141] libmachine: (ha-218762-m03) Reserving static IP address...
	I0319 19:26:12.142497   27348 main.go:141] libmachine: (ha-218762-m03) DBG | unable to find host DHCP lease matching {name: "ha-218762-m03", mac: "52:54:00:13:34:f4", ip: "192.168.39.15"} in network mk-ha-218762
	I0319 19:26:12.214606   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Getting to WaitForSSH function...
	I0319 19:26:12.214634   27348 main.go:141] libmachine: (ha-218762-m03) Reserved static IP address: 192.168.39.15
	I0319 19:26:12.214647   27348 main.go:141] libmachine: (ha-218762-m03) Waiting for SSH to be available...
	I0319 19:26:12.218432   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.218787   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.218818   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.218945   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Using SSH client type: external
	I0319 19:26:12.218973   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa (-rw-------)
	I0319 19:26:12.219019   27348 main.go:141] libmachine: (ha-218762-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 19:26:12.219037   27348 main.go:141] libmachine: (ha-218762-m03) DBG | About to run SSH command:
	I0319 19:26:12.219049   27348 main.go:141] libmachine: (ha-218762-m03) DBG | exit 0
	I0319 19:26:12.344778   27348 main.go:141] libmachine: (ha-218762-m03) DBG | SSH cmd err, output: <nil>: 
	I0319 19:26:12.345044   27348 main.go:141] libmachine: (ha-218762-m03) KVM machine creation complete!
	I0319 19:26:12.345383   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetConfigRaw
	I0319 19:26:12.345883   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:12.346058   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:12.346216   27348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 19:26:12.346229   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:26:12.347581   27348 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 19:26:12.347598   27348 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 19:26:12.347605   27348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 19:26:12.347615   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.349863   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.350216   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.350246   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.350379   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.350526   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.350644   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.350760   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.350906   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.351130   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.351142   27348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 19:26:12.460078   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:26:12.460108   27348 main.go:141] libmachine: Detecting the provisioner...
	I0319 19:26:12.460120   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.462835   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.463254   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.463283   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.463399   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.463593   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.463725   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.463921   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.464098   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.464288   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.464300   27348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 19:26:12.573659   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 19:26:12.573752   27348 main.go:141] libmachine: found compatible host: buildroot
	I0319 19:26:12.573767   27348 main.go:141] libmachine: Provisioning with buildroot...
	I0319 19:26:12.573777   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:26:12.574032   27348 buildroot.go:166] provisioning hostname "ha-218762-m03"
	I0319 19:26:12.574062   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:26:12.574253   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.576834   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.577126   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.577162   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.577301   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.577475   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.577636   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.577759   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.577897   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.578088   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.578104   27348 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762-m03 && echo "ha-218762-m03" | sudo tee /etc/hostname
	I0319 19:26:12.704916   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762-m03
	
	I0319 19:26:12.704997   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.707926   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.708306   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.708342   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.708604   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.708811   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.708970   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.709121   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.709275   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:12.709482   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:12.709500   27348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:26:12.831414   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:26:12.831441   27348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:26:12.831460   27348 buildroot.go:174] setting up certificates
	I0319 19:26:12.831470   27348 provision.go:84] configureAuth start
	I0319 19:26:12.831479   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetMachineName
	I0319 19:26:12.831730   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:12.834298   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.834620   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.834649   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.834762   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.836964   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.837332   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.837352   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.837490   27348 provision.go:143] copyHostCerts
	I0319 19:26:12.837521   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:26:12.837557   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:26:12.837574   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:26:12.837652   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:26:12.837737   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:26:12.837764   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:26:12.837774   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:26:12.837810   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:26:12.837875   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:26:12.837903   27348 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:26:12.837912   27348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:26:12.837945   27348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:26:12.838007   27348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762-m03 san=[127.0.0.1 192.168.39.15 ha-218762-m03 localhost minikube]
	I0319 19:26:12.934552   27348 provision.go:177] copyRemoteCerts
	I0319 19:26:12.934612   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:26:12.934639   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:12.936994   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.937362   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:12.937393   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:12.937614   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:12.937799   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:12.937972   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:12.938111   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.022899   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:26:13.022975   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:26:13.051348   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:26:13.051479   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0319 19:26:13.084387   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:26:13.084455   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 19:26:13.114451   27348 provision.go:87] duration metric: took 282.970424ms to configureAuth
	I0319 19:26:13.114475   27348 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:26:13.114700   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:26:13.114803   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.117440   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.117827   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.117858   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.118034   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.118209   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.118386   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.118525   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.118720   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:13.118877   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:13.118891   27348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:26:13.406083   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:26:13.406114   27348 main.go:141] libmachine: Checking connection to Docker...
	I0319 19:26:13.406124   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetURL
	I0319 19:26:13.407443   27348 main.go:141] libmachine: (ha-218762-m03) DBG | Using libvirt version 6000000
	I0319 19:26:13.409759   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.410173   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.410205   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.410365   27348 main.go:141] libmachine: Docker is up and running!
	I0319 19:26:13.410379   27348 main.go:141] libmachine: Reticulating splines...
	I0319 19:26:13.410387   27348 client.go:171] duration metric: took 23.417086044s to LocalClient.Create
	I0319 19:26:13.410415   27348 start.go:167] duration metric: took 23.417138448s to libmachine.API.Create "ha-218762"
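The "%!s(MISSING)" in the SSH command logged above is a Go format verb whose argument was not included in the log line; the echoed output shows the value that was substituted (the CRIO_MINIKUBE_OPTIONS block). A hedged sketch of how such a command string could be assembled, not minikube's actual source:

// crioopts.go: sketch of assembling the /etc/sysconfig/crio.minikube command.
package main

import "fmt"

// crioOptionsCmd builds the shell command that writes CRIO_MINIKUBE_OPTIONS
// and restarts crio, mirroring the logged command with its argument filled in.
func crioOptionsCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	return fmt.Sprintf(
		"sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
		opts)
}

func main() {
	fmt.Println(crioOptionsCmd("10.96.0.0/12"))
}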
	I0319 19:26:13.410428   27348 start.go:293] postStartSetup for "ha-218762-m03" (driver="kvm2")
	I0319 19:26:13.410445   27348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:26:13.410465   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.410681   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:26:13.410703   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.413029   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.413375   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.413432   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.413545   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.413712   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.413878   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.414049   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.500637   27348 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:26:13.505817   27348 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:26:13.505836   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:26:13.505890   27348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:26:13.505987   27348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:26:13.506001   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:26:13.506081   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:26:13.518153   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:26:13.547511   27348 start.go:296] duration metric: took 137.067109ms for postStartSetup
	I0319 19:26:13.547556   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetConfigRaw
	I0319 19:26:13.548127   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:13.550736   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.551082   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.551112   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.551340   27348 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:26:13.551514   27348 start.go:128] duration metric: took 23.575877277s to createHost
	I0319 19:26:13.551535   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.553622   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.554004   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.554024   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.554209   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.554386   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.554556   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.554698   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.554949   27348 main.go:141] libmachine: Using SSH client type: native
	I0319 19:26:13.555163   27348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0319 19:26:13.555178   27348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:26:13.661985   27348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876373.630389219
	
	I0319 19:26:13.662007   27348 fix.go:216] guest clock: 1710876373.630389219
	I0319 19:26:13.662014   27348 fix.go:229] Guest: 2024-03-19 19:26:13.630389219 +0000 UTC Remote: 2024-03-19 19:26:13.551525669 +0000 UTC m=+180.018181109 (delta=78.86355ms)
	I0319 19:26:13.662029   27348 fix.go:200] guest clock delta is within tolerance: 78.86355ms
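The guest-clock check above runs `date +%s.%N` on the VM and compares it with the host clock, accepting a small skew. A minimal sketch of that comparison; the 2-second tolerance is an assumption, not necessarily minikube's exact threshold:

// clockdelta.go: sketch of the guest/host clock-skew check.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" output from the guest and returns
// its offset from the supplied local timestamp.
func clockDelta(remote string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(remote), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64) // assumes 9 fractional digits
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	d, _ := clockDelta("1710876373.630389219", time.Unix(1710876373, 551525669))
	if d < 0 {
		d = -d
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < 2*time.Second)
}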
	I0319 19:26:13.662037   27348 start.go:83] releasing machines lock for "ha-218762-m03", held for 23.68651518s
	I0319 19:26:13.662052   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.662326   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:13.664748   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.665095   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.665124   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.667018   27348 out.go:177] * Found network options:
	I0319 19:26:13.668287   27348 out.go:177]   - NO_PROXY=192.168.39.200,192.168.39.234
	W0319 19:26:13.669507   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	W0319 19:26:13.669530   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:26:13.669540   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.670019   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.670194   27348 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:26:13.670297   27348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:26:13.670352   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	W0319 19:26:13.670367   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	W0319 19:26:13.670393   27348 proxy.go:119] fail to check proxy env: Error ip not in block
	I0319 19:26:13.670460   27348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:26:13.670479   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:26:13.672959   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673176   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673292   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.673315   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673497   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.673613   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:13.673653   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:13.673682   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.673809   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:26:13.673874   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.674015   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:26:13.674007   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.674186   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:26:13.674341   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:26:13.928223   27348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:26:13.935179   27348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:26:13.935283   27348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:26:13.953260   27348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
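The step above parks any bridge/podman CNI configs by appending a .mk_disabled suffix so they no longer load. A sketch of the same effect with filepath.Glob instead of the find/-exec mv pipeline (paths from the log; not minikube's actual code):

// cnidisable.go: sketch of disabling bridge/podman CNI configs.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	var disabled []string
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				disabled = append(disabled, m)
			}
		}
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}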
	I0319 19:26:13.953279   27348 start.go:494] detecting cgroup driver to use...
	I0319 19:26:13.953343   27348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:26:13.969520   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:26:13.984872   27348 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:26:13.985266   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:26:14.001173   27348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:26:14.015535   27348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:26:14.144819   27348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:26:14.324903   27348 docker.go:233] disabling docker service ...
	I0319 19:26:14.324974   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:26:14.339822   27348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:26:14.353753   27348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:26:14.489408   27348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:26:14.622800   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:26:14.639326   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:26:14.660342   27348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:26:14.660412   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.672540   27348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:26:14.672589   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.684564   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.696391   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.709007   27348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:26:14.721133   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.733438   27348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:26:14.752796   27348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
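The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup under it. A sketch of a few of those edits done with Go's regexp package instead of sed (same file path; only a subset of the logged rules is shown):

// crioconf.go: sketch of the 02-crio.conf rewrites shown above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Equivalent of the sed -i 's|^.*pause_image = .*$|...|' call.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-insert it after cgroup_manager.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		panic(err)
	}
}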
	I0319 19:26:14.764903   27348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:26:14.776035   27348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 19:26:14.776089   27348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 19:26:14.794027   27348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:26:14.806482   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:26:14.943339   27348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:26:15.100311   27348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:26:15.100390   27348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:26:15.106098   27348 start.go:562] Will wait 60s for crictl version
	I0319 19:26:15.106151   27348 ssh_runner.go:195] Run: which crictl
	I0319 19:26:15.111443   27348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:26:15.157129   27348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:26:15.157193   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:26:15.186981   27348 ssh_runner.go:195] Run: crio --version
	I0319 19:26:15.226072   27348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:26:15.227676   27348 out.go:177]   - env NO_PROXY=192.168.39.200
	I0319 19:26:15.229271   27348 out.go:177]   - env NO_PROXY=192.168.39.200,192.168.39.234
	I0319 19:26:15.230655   27348 main.go:141] libmachine: (ha-218762-m03) Calling .GetIP
	I0319 19:26:15.233117   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:15.233496   27348 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:26:15.233528   27348 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:26:15.233703   27348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:26:15.238689   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
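The one-liner above makes the host.minikube.internal mapping idempotent: strip any existing entry, append the gateway IP, and swap the file. A small sketch of the same idea; it writes /etc/hosts directly, whereas the logged command stages through /tmp/h.$$ and sudo cp because it runs unprivileged:

// hostsentry.go: sketch of pinning host.minikube.internal in /etc/hosts.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale mapping; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}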
	I0319 19:26:15.252544   27348 mustload.go:65] Loading cluster: ha-218762
	I0319 19:26:15.252808   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:26:15.253071   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:26:15.253106   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:26:15.268729   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I0319 19:26:15.269067   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:26:15.269517   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:26:15.269539   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:26:15.269857   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:26:15.270045   27348 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:26:15.271600   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:26:15.271925   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:26:15.271961   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:26:15.286223   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0319 19:26:15.286670   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:26:15.287128   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:26:15.287148   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:26:15.287462   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:26:15.287643   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:26:15.287787   27348 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.15
	I0319 19:26:15.287800   27348 certs.go:194] generating shared ca certs ...
	I0319 19:26:15.287817   27348 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:26:15.287938   27348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:26:15.287975   27348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:26:15.287984   27348 certs.go:256] generating profile certs ...
	I0319 19:26:15.288049   27348 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:26:15.288071   27348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953
	I0319 19:26:15.288085   27348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.15 192.168.39.254]
	I0319 19:26:15.441633   27348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953 ...
	I0319 19:26:15.441667   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953: {Name:mk13010d0a9c760f910acf1d4c93353a08108724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:26:15.441876   27348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953 ...
	I0319 19:26:15.441896   27348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953: {Name:mk956539c2e7a7a2a428fbbe80d4ebfa29546d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:26:15.441976   27348 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.12b12953 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:26:15.442099   27348 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.12b12953 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:26:15.442208   27348 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:26:15.442223   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:26:15.442236   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:26:15.442248   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:26:15.442261   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:26:15.442273   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:26:15.442285   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:26:15.442298   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:26:15.442310   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:26:15.442356   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:26:15.442384   27348 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:26:15.442394   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:26:15.442413   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:26:15.442432   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:26:15.442452   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:26:15.442487   27348 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:26:15.442510   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:26:15.442525   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:15.442537   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:26:15.442565   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:26:15.445504   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:15.445888   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:26:15.445918   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:15.446064   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:26:15.446252   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:26:15.446454   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:26:15.446592   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:26:15.524606   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0319 19:26:15.531378   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0319 19:26:15.544911   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0319 19:26:15.550115   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0319 19:26:15.562493   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0319 19:26:15.567246   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0319 19:26:15.579084   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0319 19:26:15.583737   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0319 19:26:15.596703   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0319 19:26:15.601709   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0319 19:26:15.615954   27348 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0319 19:26:15.622514   27348 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0319 19:26:15.634627   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:26:15.664312   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:26:15.692121   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:26:15.721511   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:26:15.750156   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0319 19:26:15.777506   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:26:15.804253   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:26:15.832217   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:26:15.860142   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:26:15.888811   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:26:15.916964   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:26:15.948323   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0319 19:26:15.967404   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0319 19:26:15.986570   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0319 19:26:16.005531   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0319 19:26:16.023989   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0319 19:26:16.043240   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0319 19:26:16.061549   27348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0319 19:26:16.079922   27348 ssh_runner.go:195] Run: openssl version
	I0319 19:26:16.086160   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:26:16.097602   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:26:16.102718   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:26:16.102761   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:26:16.109495   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:26:16.121439   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:26:16.133574   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:26:16.138951   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:26:16.139008   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:26:16.146150   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:26:16.159625   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:26:16.171388   27348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:16.176430   27348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:16.176480   27348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:26:16.183402   27348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
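The openssl/ln steps above install each CA into the system trust directory by symlinking <subject-hash>.0 to the PEM, which is how OpenSSL locates CAs at verification time. A sketch of that pattern (shells out to openssl for the hash, as the log does):

// installca.go: sketch of hash-symlinking a CA PEM into /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the subject hash of a PEM and links <hash>.0 to it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}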
	I0319 19:26:16.196048   27348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:26:16.201339   27348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 19:26:16.201398   27348 kubeadm.go:928] updating node {m03 192.168.39.15 8443 v1.29.3 crio true true} ...
	I0319 19:26:16.201542   27348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
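The kubelet systemd drop-in above is rendered from the node's settings (binary path for the Kubernetes version, hostname override, node IP). A sketch of rendering it with text/template; the struct fields here are illustrative, not minikube's actual types:

// kubeletdropin.go: sketch of templating the kubelet drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropin))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.29.3", "ha-218762-m03", "192.168.39.15"})
}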
	I0319 19:26:16.201580   27348 kube-vip.go:111] generating kube-vip config ...
	I0319 19:26:16.201614   27348 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:26:16.218682   27348 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:26:16.218754   27348 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0319 19:26:16.218798   27348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:26:16.229760   27348 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0319 19:26:16.229810   27348 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0319 19:26:16.240616   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0319 19:26:16.240641   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:26:16.240648   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0319 19:26:16.240692   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:26:16.240706   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0319 19:26:16.240648   27348 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0319 19:26:16.240746   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:26:16.240808   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0319 19:26:16.259208   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0319 19:26:16.259247   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0319 19:26:16.259277   27348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:26:16.259326   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0319 19:26:16.259353   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0319 19:26:16.259368   27348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0319 19:26:16.295855   27348 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0319 19:26:16.295909   27348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
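The binary.go lines above reference release URLs of the form <binary>?checksum=file:<binary>.sha256, i.e. each kubelet/kubeadm/kubectl download is verified against its published SHA-256 before being placed under /var/lib/minikube/binaries. A sketch of that download-and-verify idea for one binary (standard library only; not minikube's actual downloader):

// fetchverify.go: sketch of fetching a k8s release binary and checking its sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256") // published hex digest of the binary
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0755); err != nil {
		panic(err)
	}
}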
	I0319 19:26:17.319764   27348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0319 19:26:17.330123   27348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0319 19:26:17.348726   27348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:26:17.369990   27348 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:26:17.388196   27348 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:26:17.392454   27348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:26:17.406789   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:26:17.545568   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:26:17.566583   27348 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:26:17.567026   27348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:26:17.567076   27348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:26:17.583385   27348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0319 19:26:17.583844   27348 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:26:17.584407   27348 main.go:141] libmachine: Using API Version  1
	I0319 19:26:17.584429   27348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:26:17.584834   27348 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:26:17.585046   27348 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:26:17.585246   27348 start.go:316] joinCluster: &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:26:17.585368   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0319 19:26:17.585392   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:26:17.588998   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:17.589437   27348 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:26:17.589464   27348 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:26:17.589673   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:26:17.589837   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:26:17.590005   27348 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:26:17.590151   27348 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:26:17.766429   27348 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:26:17.766497   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 52r08t.wwjtmsr7pzpgtkbh --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m03 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443"
	I0319 19:26:45.630807   27348 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 52r08t.wwjtmsr7pzpgtkbh --discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-218762-m03 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443": (27.864280885s)
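The join above adds m03 as an additional control-plane member: kubeadm join against the cluster VIP endpoint with a bootstrap token, the CA cert hash for discovery, the cri-o socket, and the node's own advertise address. A sketch of assembling that command (token and hash are placeholders; the flags mirror the logged invocation):

// joincmd.go: sketch of building the control-plane kubeadm join command.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func joinCmd(token, caHash, nodeName, advertiseIP string) *exec.Cmd {
	args := []string{
		"env", "PATH=/var/lib/minikube/binaries/v1.29.3:" + os.Getenv("PATH"),
		"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", token,
		"--discovery-token-ca-cert-hash", "sha256:" + caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return exec.Command("sudo", args...)
}

func main() {
	cmd := joinCmd("<token>", "<ca-cert-hash>", "ha-218762-m03", "192.168.39.15")
	fmt.Println(cmd.String()) // print rather than run, for illustration
}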
	I0319 19:26:45.630853   27348 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0319 19:26:46.154388   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-218762-m03 minikube.k8s.io/updated_at=2024_03_19T19_26_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=ha-218762 minikube.k8s.io/primary=false
	I0319 19:26:46.298480   27348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-218762-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0319 19:26:46.420982   27348 start.go:318] duration metric: took 28.835732463s to joinCluster
	I0319 19:26:46.421054   27348 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:26:46.422736   27348 out.go:177] * Verifying Kubernetes components...
	I0319 19:26:46.421378   27348 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:26:46.424321   27348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:26:46.611052   27348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:26:46.631995   27348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:26:46.632245   27348 kapi.go:59] client config for ha-218762: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.crt", KeyFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key", CAFile:"/home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0319 19:26:46.632330   27348 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.200:8443
	I0319 19:26:46.632546   27348 node_ready.go:35] waiting up to 6m0s for node "ha-218762-m03" to be "Ready" ...
	I0319 19:26:46.632625   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:46.632636   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:46.632646   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:46.632652   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:46.639133   27348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0319 19:26:47.133488   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:47.133507   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:47.133515   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:47.133519   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:47.138654   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:26:47.633532   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:47.633558   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:47.633570   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:47.633577   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:47.637113   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:48.133352   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:48.133375   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:48.133393   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:48.133397   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:48.137756   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:48.633037   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:48.633058   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:48.633066   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:48.633071   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:48.636541   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:48.637623   27348 node_ready.go:53] node "ha-218762-m03" has status "Ready":"False"
	I0319 19:26:49.133413   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:49.133435   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:49.133442   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:49.133445   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:49.137088   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:49.632789   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:49.632812   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:49.632822   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:49.632829   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:49.636543   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:50.133628   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:50.133653   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:50.133664   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:50.133672   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:50.137730   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:50.632852   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:50.632878   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:50.632886   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:50.632891   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:50.637236   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:50.637967   27348 node_ready.go:53] node "ha-218762-m03" has status "Ready":"False"
	I0319 19:26:51.133457   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.133476   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.133484   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.133488   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.137938   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:51.138969   27348 node_ready.go:49] node "ha-218762-m03" has status "Ready":"True"
	I0319 19:26:51.138993   27348 node_ready.go:38] duration metric: took 4.506429244s for node "ha-218762-m03" to be "Ready" ...
	I0319 19:26:51.139004   27348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:26:51.139076   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:51.139091   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.139101   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.139137   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.147461   27348 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0319 19:26:51.155166   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.155255   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-6f64w
	I0319 19:26:51.155264   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.155275   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.155290   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.159040   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.159721   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:51.159732   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.159740   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.159745   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.162768   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.163370   27348 pod_ready.go:92] pod "coredns-76f75df574-6f64w" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.163396   27348 pod_ready.go:81] duration metric: took 8.210221ms for pod "coredns-76f75df574-6f64w" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.163409   27348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.163463   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zlz9l
	I0319 19:26:51.163478   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.163489   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.163498   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.166372   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:51.167572   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:51.167592   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.167602   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.167609   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.170689   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.172009   27348 pod_ready.go:92] pod "coredns-76f75df574-zlz9l" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.172028   27348 pod_ready.go:81] duration metric: took 8.611518ms for pod "coredns-76f75df574-zlz9l" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.172039   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.172097   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762
	I0319 19:26:51.172108   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.172126   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.172133   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.177503   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:26:51.178242   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:51.178262   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.178272   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.178281   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.181251   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:51.181795   27348 pod_ready.go:92] pod "etcd-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.181812   27348 pod_ready.go:81] duration metric: took 9.765614ms for pod "etcd-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.181824   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.181882   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m02
	I0319 19:26:51.181893   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.181904   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.181914   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.185047   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.185984   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:51.186000   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.186009   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.186018   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.188764   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:51.189465   27348 pod_ready.go:92] pod "etcd-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:51.189486   27348 pod_ready.go:81] duration metric: took 7.65385ms for pod "etcd-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.189497   27348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:51.333889   27348 request.go:629] Waited for 144.336477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:51.333963   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:51.333968   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.333976   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.333980   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.338092   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:51.533489   27348 request.go:629] Waited for 194.302117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.533558   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.533565   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.533575   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.533584   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.538258   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:51.734359   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:51.734384   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.734394   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.734398   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.737714   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:51.933950   27348 request.go:629] Waited for 195.350331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.934005   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:51.934012   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:51.934022   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:51.934030   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:51.938611   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:52.190370   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:52.190390   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.190398   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.190402   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.193935   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:52.334001   27348 request.go:629] Waited for 139.295294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:52.334055   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:52.334060   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.334069   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.334075   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.338182   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:52.690574   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:52.690597   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.690607   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.690612   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.694318   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:52.734356   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:52.734376   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:52.734385   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:52.734389   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:52.738601   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:53.190460   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/etcd-ha-218762-m03
	I0319 19:26:53.190480   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.190488   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.190492   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.193937   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.194715   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:53.194729   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.194738   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.194741   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.197532   27348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0319 19:26:53.198117   27348 pod_ready.go:92] pod "etcd-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:53.198141   27348 pod_ready.go:81] duration metric: took 2.008636s for pod "etcd-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.198166   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.334507   27348 request.go:629] Waited for 136.268277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762
	I0319 19:26:53.334588   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762
	I0319 19:26:53.334597   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.334604   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.334610   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.338596   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.533650   27348 request.go:629] Waited for 194.288619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:53.533713   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:53.533721   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.533737   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.533747   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.537207   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.537959   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:53.537976   27348 pod_ready.go:81] duration metric: took 339.79836ms for pod "kube-apiserver-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.537986   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.733766   27348 request.go:629] Waited for 195.72654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m02
	I0319 19:26:53.733838   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m02
	I0319 19:26:53.733843   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.733851   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.733858   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.737663   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.934080   27348 request.go:629] Waited for 195.399867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:53.934139   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:53.934150   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:53.934160   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:53.934174   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:53.938076   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:53.939070   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:53.939090   27348 pod_ready.go:81] duration metric: took 401.09864ms for pod "kube-apiserver-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:53.939100   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.134224   27348 request.go:629] Waited for 195.039747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m03
	I0319 19:26:54.134292   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-218762-m03
	I0319 19:26:54.134299   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.134309   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.134320   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.138294   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.333598   27348 request.go:629] Waited for 194.207635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:54.333660   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:54.333665   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.333673   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.333678   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.337576   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.338571   27348 pod_ready.go:92] pod "kube-apiserver-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:54.338596   27348 pod_ready.go:81] duration metric: took 399.487941ms for pod "kube-apiserver-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.338609   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.533566   27348 request.go:629] Waited for 194.895721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:26:54.533624   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762
	I0319 19:26:54.533641   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.533649   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.533653   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.537341   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.733576   27348 request.go:629] Waited for 195.281354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:54.733628   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:54.733633   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.733641   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.733644   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.737553   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:54.738615   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:54.738634   27348 pod_ready.go:81] duration metric: took 400.016617ms for pod "kube-controller-manager-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.738644   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:54.933582   27348 request.go:629] Waited for 194.869012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:26:54.933651   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m02
	I0319 19:26:54.933659   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:54.933683   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:54.933706   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:54.937347   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:55.133988   27348 request.go:629] Waited for 195.812982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.134054   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.134076   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.134087   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.134095   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.139642   27348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0319 19:26:55.141192   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:55.141210   27348 pod_ready.go:81] duration metric: took 402.559898ms for pod "kube-controller-manager-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.141219   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.334384   27348 request.go:629] Waited for 193.094247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m03
	I0319 19:26:55.334433   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-218762-m03
	I0319 19:26:55.334438   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.334446   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.334450   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.338550   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:55.533943   27348 request.go:629] Waited for 194.353574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:55.534041   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:55.534052   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.534063   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.534072   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.538659   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:55.539323   27348 pod_ready.go:92] pod "kube-controller-manager-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:55.539344   27348 pod_ready.go:81] duration metric: took 398.119009ms for pod "kube-controller-manager-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.539354   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.734474   27348 request.go:629] Waited for 195.058128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:26:55.734549   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q4nx
	I0319 19:26:55.734554   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.734562   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.734567   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.738483   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:55.933973   27348 request.go:629] Waited for 194.360084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.934022   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:55.934028   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:55.934035   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:55.934038   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:55.937737   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:55.938631   27348 pod_ready.go:92] pod "kube-proxy-9q4nx" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:55.938650   27348 pod_ready.go:81] duration metric: took 399.289929ms for pod "kube-proxy-9q4nx" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:55.938662   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lq48k" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.133721   27348 request.go:629] Waited for 194.974778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lq48k
	I0319 19:26:56.133783   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lq48k
	I0319 19:26:56.133794   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.133805   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.133816   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.138584   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:56.333975   27348 request.go:629] Waited for 194.387303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:56.334026   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:56.334031   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.334038   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.334042   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.338142   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:56.338829   27348 pod_ready.go:92] pod "kube-proxy-lq48k" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:56.338848   27348 pod_ready.go:81] duration metric: took 400.179335ms for pod "kube-proxy-lq48k" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.338861   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.534017   27348 request.go:629] Waited for 195.058484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:26:56.534068   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qd8kk
	I0319 19:26:56.534073   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.534080   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.534087   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.538077   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:56.734353   27348 request.go:629] Waited for 195.37726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:56.734405   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:56.734411   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.734422   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.734429   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.738349   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:56.739287   27348 pod_ready.go:92] pod "kube-proxy-qd8kk" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:56.739309   27348 pod_ready.go:81] duration metric: took 400.441531ms for pod "kube-proxy-qd8kk" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.739320   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:56.934400   27348 request.go:629] Waited for 195.013252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:26:56.934452   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762
	I0319 19:26:56.934457   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:56.934464   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:56.934468   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:56.938554   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:57.133706   27348 request.go:629] Waited for 194.293257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:57.133762   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762
	I0319 19:26:57.133769   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.133779   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.133784   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.137298   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:57.138027   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:57.138044   27348 pod_ready.go:81] duration metric: took 398.718431ms for pod "kube-scheduler-ha-218762" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.138053   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.334143   27348 request.go:629] Waited for 196.034183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:26:57.334211   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m02
	I0319 19:26:57.334220   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.334227   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.334234   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.338476   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:57.534511   27348 request.go:629] Waited for 195.364987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:57.534564   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m02
	I0319 19:26:57.534569   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.534576   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.534583   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.538686   27348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0319 19:26:57.539325   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:57.539342   27348 pod_ready.go:81] duration metric: took 401.283364ms for pod "kube-scheduler-ha-218762-m02" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.539351   27348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.733819   27348 request.go:629] Waited for 194.407592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m03
	I0319 19:26:57.733909   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-218762-m03
	I0319 19:26:57.733918   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.733928   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.733938   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.737717   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:57.934273   27348 request.go:629] Waited for 195.656121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:57.934338   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes/ha-218762-m03
	I0319 19:26:57.934344   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.934352   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.934360   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.937810   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:57.938582   27348 pod_ready.go:92] pod "kube-scheduler-ha-218762-m03" in "kube-system" namespace has status "Ready":"True"
	I0319 19:26:57.938603   27348 pod_ready.go:81] duration metric: took 399.245881ms for pod "kube-scheduler-ha-218762-m03" in "kube-system" namespace to be "Ready" ...
	I0319 19:26:57.938614   27348 pod_ready.go:38] duration metric: took 6.799598369s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 19:26:57.938628   27348 api_server.go:52] waiting for apiserver process to appear ...
	I0319 19:26:57.938674   27348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:26:57.958163   27348 api_server.go:72] duration metric: took 11.537075445s to wait for apiserver process to appear ...
	I0319 19:26:57.958185   27348 api_server.go:88] waiting for apiserver healthz status ...
	I0319 19:26:57.958205   27348 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0319 19:26:57.962762   27348 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0319 19:26:57.962812   27348 round_trippers.go:463] GET https://192.168.39.200:8443/version
	I0319 19:26:57.962817   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:57.962825   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:57.962830   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:57.963867   27348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0319 19:26:57.963914   27348 api_server.go:141] control plane version: v1.29.3
	I0319 19:26:57.963931   27348 api_server.go:131] duration metric: took 5.741178ms to wait for apiserver health ...
	I0319 19:26:57.963938   27348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 19:26:58.134365   27348 request.go:629] Waited for 170.338863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.134448   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.134455   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.134465   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.134476   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.142269   27348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0319 19:26:58.149759   27348 system_pods.go:59] 24 kube-system pods found
	I0319 19:26:58.149787   27348 system_pods.go:61] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:26:58.149791   27348 system_pods.go:61] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:26:58.149794   27348 system_pods.go:61] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:26:58.149797   27348 system_pods.go:61] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:26:58.149800   27348 system_pods.go:61] "etcd-ha-218762-m03" [abaf6f38-4d54-46a5-bf59-a31f3e170ff8] Running
	I0319 19:26:58.149803   27348 system_pods.go:61] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:26:58.149806   27348 system_pods.go:61] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:26:58.149809   27348 system_pods.go:61] "kindnet-wv72v" [1ed042d3-e756-4c78-8708-5c5879b8488a] Running
	I0319 19:26:58.149812   27348 system_pods.go:61] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:26:58.149815   27348 system_pods.go:61] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:26:58.149818   27348 system_pods.go:61] "kube-apiserver-ha-218762-m03" [41b039c5-b777-45ea-bceb-74b2536a8a0e] Running
	I0319 19:26:58.149821   27348 system_pods.go:61] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:26:58.149825   27348 system_pods.go:61] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:26:58.149828   27348 system_pods.go:61] "kube-controller-manager-ha-218762-m03" [7a3c20f3-8688-4ff9-b1c6-bf79af946890] Running
	I0319 19:26:58.149831   27348 system_pods.go:61] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:26:58.149835   27348 system_pods.go:61] "kube-proxy-lq48k" [276cdcac-8e8b-4521-9ef0-a83138baa085] Running
	I0319 19:26:58.149838   27348 system_pods.go:61] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:26:58.149841   27348 system_pods.go:61] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:26:58.149844   27348 system_pods.go:61] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:26:58.149847   27348 system_pods.go:61] "kube-scheduler-ha-218762-m03" [ebb4beba-a1e3-40fb-bc25-44ff5c2883b8] Running
	I0319 19:26:58.149850   27348 system_pods.go:61] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:26:58.149853   27348 system_pods.go:61] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:26:58.149855   27348 system_pods.go:61] "kube-vip-ha-218762-m03" [4892ef9c-057c-4361-bb82-f64de67babb0] Running
	I0319 19:26:58.149858   27348 system_pods.go:61] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:26:58.149863   27348 system_pods.go:74] duration metric: took 185.921106ms to wait for pod list to return data ...
	I0319 19:26:58.149873   27348 default_sa.go:34] waiting for default service account to be created ...
	I0319 19:26:58.334270   27348 request.go:629] Waited for 184.336644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:26:58.334325   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/default/serviceaccounts
	I0319 19:26:58.334331   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.334339   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.334344   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.338204   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:58.338318   27348 default_sa.go:45] found service account: "default"
	I0319 19:26:58.338334   27348 default_sa.go:55] duration metric: took 188.454354ms for default service account to be created ...
	I0319 19:26:58.338348   27348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 19:26:58.533645   27348 request.go:629] Waited for 195.200652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.533699   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/namespaces/kube-system/pods
	I0319 19:26:58.533704   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.533712   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.533715   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.541780   27348 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0319 19:26:58.548496   27348 system_pods.go:86] 24 kube-system pods found
	I0319 19:26:58.548527   27348 system_pods.go:89] "coredns-76f75df574-6f64w" [5b250bb2-07f0-46db-8e58-4584fbe4f882] Running
	I0319 19:26:58.548534   27348 system_pods.go:89] "coredns-76f75df574-zlz9l" [5fd420b7-5377-4b53-b5c3-4e785436bd9e] Running
	I0319 19:26:58.548541   27348 system_pods.go:89] "etcd-ha-218762" [11a35b59-7388-40ad-8c5b-e032d0d4a7cd] Running
	I0319 19:26:58.548546   27348 system_pods.go:89] "etcd-ha-218762-m02" [ef762c49-20e4-4a9d-8d09-f70921cc6385] Running
	I0319 19:26:58.548552   27348 system_pods.go:89] "etcd-ha-218762-m03" [abaf6f38-4d54-46a5-bf59-a31f3e170ff8] Running
	I0319 19:26:58.548557   27348 system_pods.go:89] "kindnet-4b7jg" [fc08f0ca-42c2-42ea-8ad1-29c99be7f86f] Running
	I0319 19:26:58.548564   27348 system_pods.go:89] "kindnet-d8pkw" [566eb397-5ea5-4bc5-af28-3c5e9a12346b] Running
	I0319 19:26:58.548570   27348 system_pods.go:89] "kindnet-wv72v" [1ed042d3-e756-4c78-8708-5c5879b8488a] Running
	I0319 19:26:58.548575   27348 system_pods.go:89] "kube-apiserver-ha-218762" [37a7b7a7-f2a6-40b0-a90e-c46b2f3d0d6a] Running
	I0319 19:26:58.548581   27348 system_pods.go:89] "kube-apiserver-ha-218762-m02" [ff26d88a-e999-4a6c-958b-b62391de8c26] Running
	I0319 19:26:58.548589   27348 system_pods.go:89] "kube-apiserver-ha-218762-m03" [41b039c5-b777-45ea-bceb-74b2536a8a0e] Running
	I0319 19:26:58.548595   27348 system_pods.go:89] "kube-controller-manager-ha-218762" [aaea730f-a87c-4fbf-8bf5-17bad832726c] Running
	I0319 19:26:58.548605   27348 system_pods.go:89] "kube-controller-manager-ha-218762-m02" [eb3ae994-e89e-4add-bf7d-4aa569d0e033] Running
	I0319 19:26:58.548611   27348 system_pods.go:89] "kube-controller-manager-ha-218762-m03" [7a3c20f3-8688-4ff9-b1c6-bf79af946890] Running
	I0319 19:26:58.548620   27348 system_pods.go:89] "kube-proxy-9q4nx" [4600f479-072e-4c04-97ac-8d230d71fee5] Running
	I0319 19:26:58.548626   27348 system_pods.go:89] "kube-proxy-lq48k" [276cdcac-8e8b-4521-9ef0-a83138baa085] Running
	I0319 19:26:58.548636   27348 system_pods.go:89] "kube-proxy-qd8kk" [5c7dcc06-c11b-4173-9b5b-49aef039c7ee] Running
	I0319 19:26:58.548642   27348 system_pods.go:89] "kube-scheduler-ha-218762" [4745d221-88bf-489b-9aab-ad1e41b3cc8d] Running
	I0319 19:26:58.548649   27348 system_pods.go:89] "kube-scheduler-ha-218762-m02" [c9edf9e8-b52e-4438-a3f9-3ff26fe72908] Running
	I0319 19:26:58.548655   27348 system_pods.go:89] "kube-scheduler-ha-218762-m03" [ebb4beba-a1e3-40fb-bc25-44ff5c2883b8] Running
	I0319 19:26:58.548665   27348 system_pods.go:89] "kube-vip-ha-218762" [d889098d-f271-4dcf-8dbc-e1cddbe35405] Running
	I0319 19:26:58.548670   27348 system_pods.go:89] "kube-vip-ha-218762-m02" [07727bb2-7ecd-4967-823f-3916e560ce53] Running
	I0319 19:26:58.548678   27348 system_pods.go:89] "kube-vip-ha-218762-m03" [4892ef9c-057c-4361-bb82-f64de67babb0] Running
	I0319 19:26:58.548683   27348 system_pods.go:89] "storage-provisioner" [6a496ada-aaf7-47a5-bd5d-5d909ef5df10] Running
	I0319 19:26:58.548694   27348 system_pods.go:126] duration metric: took 210.337822ms to wait for k8s-apps to be running ...
	I0319 19:26:58.548749   27348 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 19:26:58.548820   27348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:26:58.568949   27348 system_svc.go:56] duration metric: took 20.19424ms WaitForService to wait for kubelet
	I0319 19:26:58.568974   27348 kubeadm.go:576] duration metric: took 12.147890574s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:26:58.568993   27348 node_conditions.go:102] verifying NodePressure condition ...
	I0319 19:26:58.733856   27348 request.go:629] Waited for 164.801344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.200:8443/api/v1/nodes
	I0319 19:26:58.733948   27348 round_trippers.go:463] GET https://192.168.39.200:8443/api/v1/nodes
	I0319 19:26:58.733955   27348 round_trippers.go:469] Request Headers:
	I0319 19:26:58.733966   27348 round_trippers.go:473]     Accept: application/json, */*
	I0319 19:26:58.733973   27348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0319 19:26:58.737724   27348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0319 19:26:58.738825   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:26:58.738845   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:26:58.738854   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:26:58.738858   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:26:58.738863   27348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 19:26:58.738867   27348 node_conditions.go:123] node cpu capacity is 2
	I0319 19:26:58.738873   27348 node_conditions.go:105] duration metric: took 169.875397ms to run NodePressure ...
	I0319 19:26:58.738894   27348 start.go:240] waiting for startup goroutines ...
	I0319 19:26:58.738915   27348 start.go:254] writing updated cluster config ...
	I0319 19:26:58.739236   27348 ssh_runner.go:195] Run: rm -f paused
	I0319 19:26:58.793008   27348 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 19:26:58.795218   27348 out.go:177] * Done! kubectl is now configured to use "ha-218762" cluster and "default" namespace by default
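	(Editor's note.) The trace above is minikube's join-verification loop for the third control-plane node: it repeatedly issues GET /api/v1/nodes/ha-218762-m03 until the node reports Ready, then checks each system-critical pod in kube-system, and finally probes the apiserver's /healthz and /version endpoints before declaring "Done!". The sketch below is purely illustrative and is not minikube's actual code; it reproduces the node-readiness poll with client-go under assumed placeholders (the kubeconfig path is hypothetical, and the node name and the 6m0s/500ms timings are taken from the log above).

	// Illustrative sketch only: poll a node's Ready condition the way the log above does.
	// The kubeconfig path below is a placeholder assumption, not a value from this report.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" in the log
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-218762-m03", metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
		}
		fmt.Println("timed out waiting for node readiness")
	}

	The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the trace come from client-go's default rate limiter, not from the apiserver; a real client that needs to poll this aggressively would typically raise QPS/Burst on the rest.Config rather than rely on the default limits.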
	
	
	==> CRI-O <==
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.416553977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876687416517161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4deb8634-0cae-45fc-a980-7b6bc047de94 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.417424893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88876f62-e62a-4671-9a97-868aebc7c062 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.417475759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88876f62-e62a-4671-9a97-868aebc7c062 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.417731311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88876f62-e62a-4671-9a97-868aebc7c062 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.466109375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f55d3a2-fb7c-4d7f-a83d-6a4ea71002fb name=/runtime.v1.RuntimeService/Version
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.466181749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f55d3a2-fb7c-4d7f-a83d-6a4ea71002fb name=/runtime.v1.RuntimeService/Version
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.467315133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e9cb6c0-58ca-4073-ae81-f7332a06caca name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.468259012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876687468234029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e9cb6c0-58ca-4073-ae81-f7332a06caca name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.468763257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b0f0594-c514-45a9-a089-58e736cfdde5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.469017959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b0f0594-c514-45a9-a089-58e736cfdde5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.469397917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b0f0594-c514-45a9-a089-58e736cfdde5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.516584579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0493ccd-fad4-415f-9363-2d53ff88bbed name=/runtime.v1.RuntimeService/Version
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.516714276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0493ccd-fad4-415f-9363-2d53ff88bbed name=/runtime.v1.RuntimeService/Version
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.518618156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02762825-370f-4e91-9d2c-a1c434aa3d49 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.519568770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876687519532974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02762825-370f-4e91-9d2c-a1c434aa3d49 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.520298642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1b66780-52e6-4bcd-b228-2847cf39ff5d name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.520379545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1b66780-52e6-4bcd-b228-2847cf39ff5d name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.521383525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1b66780-52e6-4bcd-b228-2847cf39ff5d name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.569761970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe67ae2f-ca94-4626-913a-4740955526aa name=/runtime.v1.RuntimeService/Version
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.569919926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe67ae2f-ca94-4626-913a-4740955526aa name=/runtime.v1.RuntimeService/Version
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.572139806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5463ad9d-6eb5-4ce2-8e8a-c54a48d5de44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.572525742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710876687572503597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5463ad9d-6eb5-4ce2-8e8a-c54a48d5de44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.573114258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f35875b2-96da-4eae-a6a3-d3b3e29c6764 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.573200865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f35875b2-96da-4eae-a6a3-d3b3e29c6764 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:31:27 ha-218762 crio[681]: time="2024-03-19 19:31:27.573425964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876423582485464,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252812205121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876252773670101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851,PodSandboxId:fcb5bf156cf82773ebb05eedc615fbbddc1e435c2e4f1d77c17086d3b37d6213,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1710876251753478007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986,PodSandboxId:656b34459ad37ffda6bdafb3335f9850fa09f5f979857d33460456539a8327b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876
249906040011,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876249681284501,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8,PodSandboxId:b395ee7355871d83fbfe7eaab849951a088bffa10b741a411a0b6f12cbb10cf6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876232952633794,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a8b2f8fb53080a4dfc07522f9bab3e7,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876229364873285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda,PodSandboxId:32f987658f0995964f6a308eb67bb8a271f477f61c032d6f05e8fae6936637de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876229227569919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab,PodSandboxId:c9b47f6ddfd26987dae3098ce1f18922a2149a26c1a95c62d60b64fe5934c143,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876229211360072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876229129712911,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f35875b2-96da-4eae-a6a3-d3b3e29c6764 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d5224aff0311       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   03d5a8bf10dee       busybox-7fdf7869d9-d8xsk
	109c2437b7712       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   42b1b389a8129       coredns-76f75df574-6f64w
	4c1e36efc888a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   9e44b306f2e4f       coredns-76f75df574-zlz9l
	49e04c50e3c86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   fcb5bf156cf82       storage-provisioner
	ee8377d7b6d9a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   656b34459ad37       kindnet-d8pkw
	ab7b5d52d6006       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago       Running             kube-proxy                0                   c02a60ba78138       kube-proxy-qd8kk
	da2851243bc4c       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   b395ee7355871       kube-vip-ha-218762
	dc37df9447020       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   59a484b792912       etcd-ha-218762
	136b31ae3d992       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   32f987658f099       kube-controller-manager-ha-218762
	82c2c39ac3bd9       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   c9b47f6ddfd26       kube-apiserver-ha-218762
	b8f592d52269d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   ffe45f05ed53a       kube-scheduler-ha-218762
	
	
	==> coredns [109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57] <==
	[INFO] 10.244.1.2:58529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229048s
	[INFO] 10.244.1.2:43335 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000190217s
	[INFO] 10.244.1.2:52240 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000587827s
	[INFO] 10.244.2.2:40073 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000116489s
	[INFO] 10.244.2.2:56969 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001486663s
	[INFO] 10.244.0.4:33585 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003760519s
	[INFO] 10.244.0.4:59082 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137291s
	[INFO] 10.244.0.4:40935 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118623s
	[INFO] 10.244.0.4:47943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107248s
	[INFO] 10.244.0.4:59058 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076766s
	[INFO] 10.244.1.2:50311 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001848487s
	[INFO] 10.244.1.2:43198 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174765s
	[INFO] 10.244.1.2:52346 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001415553s
	[INFO] 10.244.1.2:43441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076976s
	[INFO] 10.244.1.2:34726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138048s
	[INFO] 10.244.1.2:45607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112925s
	[INFO] 10.244.2.2:40744 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001749217s
	[INFO] 10.244.2.2:53029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111621s
	[INFO] 10.244.2.2:40938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014131s
	[INFO] 10.244.2.2:56391 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130828s
	[INFO] 10.244.1.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015755s
	[INFO] 10.244.2.2:42534 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120056s
	[INFO] 10.244.2.2:54358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316425s
	[INFO] 10.244.0.4:60417 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238089s
	[INFO] 10.244.0.4:60483 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144782s
	
	
	==> coredns [4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3] <==
	[INFO] 10.244.1.2:50371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146692s
	[INFO] 10.244.1.2:40281 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179601s
	[INFO] 10.244.2.2:51591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262048s
	[INFO] 10.244.2.2:40024 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001651832s
	[INFO] 10.244.2.2:45470 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153125s
	[INFO] 10.244.2.2:44372 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161391s
	[INFO] 10.244.0.4:55323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007536s
	[INFO] 10.244.0.4:36522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010122s
	[INFO] 10.244.0.4:59910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068387s
	[INFO] 10.244.0.4:56467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053097s
	[INFO] 10.244.1.2:47288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107648s
	[INFO] 10.244.1.2:47476 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075973s
	[INFO] 10.244.1.2:33459 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186954s
	[INFO] 10.244.2.2:42752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177891s
	[INFO] 10.244.2.2:55553 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189177s
	[INFO] 10.244.0.4:39711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067897s
	[INFO] 10.244.0.4:46192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.002995771s
	[INFO] 10.244.1.2:52462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000332016s
	[INFO] 10.244.1.2:33081 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215617s
	[INFO] 10.244.1.2:48821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092021s
	[INFO] 10.244.1.2:39937 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000452168s
	[INFO] 10.244.2.2:43887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122925s
	[INFO] 10.244.2.2:38523 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093183s
	[INFO] 10.244.2.2:56286 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149396s
	[INFO] 10.244.2.2:33782 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081737s
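
The CoreDNS query log above can be re-read straight from the cluster; a minimal sketch, assuming the kubectl context matches the ha-218762 profile and the pods carry the default k8s-app=kube-dns label:

  kubectl --context ha-218762 -n kube-system logs -l k8s-app=kube-dns --tail=50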
	
	
	==> describe nodes <==
	Name:               ha-218762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:27:31 +0000   Tue, 19 Mar 2024 19:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-218762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee6305e340734ffab00fb0013188dc6a
	  System UUID:                ee6305e3-4073-4ffa-b00f-b0013188dc6a
	  Boot ID:                    4a3c9f80-1526-4057-9e0e-fd3e10e41bd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d8xsk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 coredns-76f75df574-6f64w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 coredns-76f75df574-zlz9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 etcd-ha-218762                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m31s
	  kube-system                 kindnet-d8pkw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m20s
	  kube-system                 kube-apiserver-ha-218762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-ha-218762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-qd8kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-scheduler-ha-218762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-vip-ha-218762                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m39s (x7 over 7m39s)  kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m39s (x8 over 7m39s)  kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x8 over 7m39s)  kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m20s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal  NodeReady                7m16s                  kubelet          Node ha-218762 status is now: NodeReady
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	
	
	Name:               ha-218762-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:25:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:28:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 19 Mar 2024 19:27:33 +0000   Tue, 19 Mar 2024 19:28:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-218762-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21ee6ca9760341f0b88147e7d26bc5a4
	  System UUID:                21ee6ca9-7603-41f0-b881-47e7d26bc5a4
	  Boot ID:                    d29cfd35-9738-4ec3-bdfa-fd53b9a80f75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ds2kh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-218762-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m55s
	  kube-system                 kindnet-4b7jg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m56s
	  kube-system                 kube-apiserver-ha-218762-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-controller-manager-ha-218762-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-proxy-9q4nx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-scheduler-ha-218762-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-vip-ha-218762-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m57s)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m57s)  kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m57s)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m55s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-218762-m02 status is now: NodeNotReady
	
	
	Name:               ha-218762-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_26_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:26:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:27:11 +0000   Tue, 19 Mar 2024 19:26:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-218762-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc67d42b66264826a0e5dce81a989b48
	  System UUID:                cc67d42b-6626-4826-a0e5-dce81a989b48
	  Boot ID:                    f8b7dffa-c338-4457-9479-0c1c4ffa0bcd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-qrc54                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-218762-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m46s
	  kube-system                 kindnet-wv72v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m47s
	  kube-system                 kube-apiserver-ha-218762-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-ha-218762-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-lq48k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-scheduler-ha-218762-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-vip-ha-218762-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node ha-218762-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m45s                  node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	
	
	Name:               ha-218762-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_27_38_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:31:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:28:08 +0000   Tue, 19 Mar 2024 19:27:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-218762-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3252307468a44b83a5ab5199d03a0035
	  System UUID:                32523074-68a4-4b83-a5ab-5199d03a0035
	  Boot ID:                    e02289dd-a17d-490f-93ec-aa5804396da3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hslwj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-proxy-nth69    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m51s (x2 over 3m51s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x2 over 3m51s)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x2 over 3m51s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal  NodeReady                3m40s                  kubelet          Node ha-218762-m04 status is now: NodeReady
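
The NotReady state and unreachable taints reported for ha-218762-m02 above can be cross-checked without the full describe output; a sketch, assuming the ha-218762 context is active:

  kubectl --context ha-218762 get nodes -o wide
  kubectl --context ha-218762 get node ha-218762-m02 -o jsonpath='{.spec.taints}'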
	
	
	==> dmesg <==
	[Mar19 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052973] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.586107] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.313943] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.668535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.074231] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.062282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064060] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.205706] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.113821] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.284359] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.977018] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.063791] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.785726] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.566086] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.304560] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.098669] kauditd_printk_skb: 51 callbacks suppressed
	[Mar19 19:24] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 19:25] kauditd_printk_skb: 74 callbacks suppressed
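
The kernel ring buffer above was captured from the primary control-plane VM; it can be re-read over minikube ssh, a sketch assuming the ha-218762 profile is still running:

  out/minikube-linux-amd64 -p ha-218762 ssh -- sudo dmesg | tail -n 30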
	
	
	==> etcd [dc37df944702003608d704925db1515b753c461128e874e10764393af312326c] <==
	{"level":"warn","ts":"2024-03-19T19:31:27.857219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.864234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.871293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.882013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.887719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.891187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.900493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.904313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.907734Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.91651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.928923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.939931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.944213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.948456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.957652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.966212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.967868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.976689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.980165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.984318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.989228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.991347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:27.998463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:28.005399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:31:28.02004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:28 up 8 min,  0 users,  load average: 0.24, 0.41, 0.24
	Linux ha-218762 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986] <==
	I0319 19:30:51.578680       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:31:01.586319       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:31:01.586374       1 main.go:227] handling current node
	I0319 19:31:01.586392       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:31:01.586398       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:31:01.586523       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:31:01.586556       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:31:01.586607       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:31:01.586638       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:31:11.600541       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:31:11.600587       1 main.go:227] handling current node
	I0319 19:31:11.600598       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:31:11.600604       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:31:11.600886       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:31:11.600993       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:31:11.601125       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:31:11.601132       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:31:21.617137       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:31:21.617282       1 main.go:227] handling current node
	I0319 19:31:21.617322       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:31:21.617351       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:31:21.617549       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:31:21.617591       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:31:21.617666       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:31:21.617686       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab] <==
	I0319 19:23:52.142480       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 19:23:52.143249       1 aggregator.go:165] initial CRD sync complete...
	I0319 19:23:52.143376       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 19:23:52.143403       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 19:23:52.143502       1 cache.go:39] Caches are synced for autoregister controller
	I0319 19:23:52.149116       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 19:23:52.936452       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0319 19:23:52.949762       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0319 19:23:52.949910       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 19:23:53.815468       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 19:23:53.863166       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 19:23:54.037144       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0319 19:23:54.044003       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200]
	I0319 19:23:54.045187       1 controller.go:624] quota admission added evaluator for: endpoints
	I0319 19:23:54.050161       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 19:23:54.081084       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 19:23:55.968360       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 19:23:55.986155       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0319 19:23:55.999202       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 19:24:07.687029       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0319 19:24:07.937286       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0319 19:27:40.030974       1 trace.go:236] Trace[1019634669]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0823910c-571a-4af2-9bc5-1a655210a684,client:192.168.39.161,api-group:,api-version:v1,name:kindnet-zwcq2,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-zwcq2,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:DELETE (19-Mar-2024 19:27:39.415) (total time: 615ms):
	Trace[1019634669]: ---"Object deleted from database" 543ms (19:27:40.030)
	Trace[1019634669]: [615.019116ms] [615.019116ms] END
	W0319 19:28:14.060350       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.200]
	
	
	==> kube-controller-manager [136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda] <==
	E0319 19:27:37.120296       1 certificate_controller.go:146] Sync csr-2kj9c failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2kj9c": the object has been modified; please apply your changes to the latest version and try again
	E0319 19:27:37.139424       1 certificate_controller.go:146] Sync csr-2kj9c failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2kj9c": the object has been modified; please apply your changes to the latest version and try again
	I0319 19:27:37.411270       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-218762-m04\" does not exist"
	I0319 19:27:37.459117       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nth69"
	I0319 19:27:37.470398       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l9pt2"
	I0319 19:27:37.474116       1 range_allocator.go:380] "Set node PodCIDR" node="ha-218762-m04" podCIDRs=["10.244.3.0/24"]
	I0319 19:27:37.587207       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-dnc6g"
	I0319 19:27:37.633970       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-l9pt2"
	I0319 19:27:37.707458       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-pq49n"
	I0319 19:27:37.707524       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-zwcq2"
	I0319 19:27:42.215692       1 event.go:376] "Event occurred" object="ha-218762-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller"
	I0319 19:27:42.234751       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-218762-m04"
	I0319 19:27:48.274225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-218762-m04"
	I0319 19:28:47.262273       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-218762-m04"
	I0319 19:28:47.263661       1 event.go:376] "Event occurred" object="ha-218762-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-218762-m02 status is now: NodeNotReady"
	I0319 19:28:47.290060       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.304247       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.318491       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ds2kh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.332874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.282269ms"
	I0319 19:28:47.333972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="93.39µs"
	I0319 19:28:47.340575       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-9q4nx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.366715       1 event.go:376] "Event occurred" object="kube-system/kindnet-4b7jg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.389968       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.415656       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:28:47.443169       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-218762-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
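
The NodeNotReady cascade above (the node controller marking ha-218762-m02 and every pod scheduled on it not ready) also shows up in the event stream; a sketch, assuming the ha-218762 context:

  kubectl --context ha-218762 get events -A --field-selector reason=NodeNotReady --sort-by=.lastTimestamp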
	
	
	==> kube-proxy [ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6] <==
	I0319 19:24:09.933910       1 server_others.go:72] "Using iptables proxy"
	I0319 19:24:09.951054       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0319 19:24:10.000172       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:24:10.000241       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:24:10.000268       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:24:10.004117       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:24:10.004313       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:24:10.004496       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:24:10.006774       1 config.go:188] "Starting service config controller"
	I0319 19:24:10.007178       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:24:10.007235       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:24:10.007254       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:24:10.009090       1 config.go:315] "Starting node config controller"
	I0319 19:24:10.009130       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:24:10.107878       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 19:24:10.107985       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:24:10.109254       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a] <==
	W0319 19:23:53.290339       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:23:53.290396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:23:53.292438       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 19:23:53.292503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 19:23:53.301423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 19:23:53.301472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:23:53.342779       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 19:23:53.342921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 19:23:53.348707       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 19:23:53.348928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 19:23:53.366723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:23:53.366845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:23:53.460916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 19:23:53.460994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 19:23:53.500052       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 19:23:53.500112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 19:23:53.570185       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:23:53.570249       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 19:23:55.700059       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0319 19:27:37.506188       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l9pt2\": pod kindnet-l9pt2 is already assigned to node \"ha-218762-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l9pt2" node="ha-218762-m04"
	E0319 19:27:37.506418       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l9pt2\": pod kindnet-l9pt2 is already assigned to node \"ha-218762-m04\"" pod="kube-system/kindnet-l9pt2"
	E0319 19:27:37.546108       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dnc6g\": pod kube-proxy-dnc6g is already assigned to node \"ha-218762-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dnc6g" node="ha-218762-m04"
	E0319 19:27:37.546222       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 9aab85ee-ad94-4703-864a-11c1720eb35f(kube-system/kube-proxy-dnc6g) wasn't assumed so cannot be forgotten"
	E0319 19:27:37.546306       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dnc6g\": pod kube-proxy-dnc6g is already assigned to node \"ha-218762-m04\"" pod="kube-system/kube-proxy-dnc6g"
	I0319 19:27:37.546391       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dnc6g" node="ha-218762-m04"
	
	
	==> kubelet <==
	Mar 19 19:26:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:26:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:26:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:26:59 ha-218762 kubelet[1386]: I0319 19:26:59.828973    1386 topology_manager.go:215] "Topology Admit Handler" podUID="6f5b6f71-8881-4429-a25f-ca62fef2f65c" podNamespace="default" podName="busybox-7fdf7869d9-d8xsk"
	Mar 19 19:26:59 ha-218762 kubelet[1386]: I0319 19:26:59.885065    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689s2\" (UniqueName: \"kubernetes.io/projected/6f5b6f71-8881-4429-a25f-ca62fef2f65c-kube-api-access-689s2\") pod \"busybox-7fdf7869d9-d8xsk\" (UID: \"6f5b6f71-8881-4429-a25f-ca62fef2f65c\") " pod="default/busybox-7fdf7869d9-d8xsk"
	Mar 19 19:27:56 ha-218762 kubelet[1386]: E0319 19:27:56.166920    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:27:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:27:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:27:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:27:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:28:56 ha-218762 kubelet[1386]: E0319 19:28:56.171252    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:28:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:28:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:28:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:28:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:29:56 ha-218762 kubelet[1386]: E0319 19:29:56.168413    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:29:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:29:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:29:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:29:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:30:56 ha-218762 kubelet[1386]: E0319 19:30:56.166358    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:30:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:30:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:30:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:30:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-218762 -n ha-218762
helpers_test.go:261: (dbg) Run:  kubectl --context ha-218762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (427.17s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-218762 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-218762 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-218762 -v=7 --alsologtostderr: exit status 82 (2m2.69699037s)

-- stdout --
	* Stopping node "ha-218762-m04"  ...
	* Stopping node "ha-218762-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I0319 19:31:29.581582   32705 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:31:29.581827   32705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:29.581837   32705 out.go:304] Setting ErrFile to fd 2...
	I0319 19:31:29.581844   32705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:31:29.582025   32705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:31:29.582262   32705 out.go:298] Setting JSON to false
	I0319 19:31:29.582358   32705 mustload.go:65] Loading cluster: ha-218762
	I0319 19:31:29.582716   32705 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:31:29.582821   32705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:31:29.583005   32705 mustload.go:65] Loading cluster: ha-218762
	I0319 19:31:29.583162   32705 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:31:29.583197   32705 stop.go:39] StopHost: ha-218762-m04
	I0319 19:31:29.583589   32705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:29.583649   32705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:29.598760   32705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0319 19:31:29.599216   32705 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:29.599746   32705 main.go:141] libmachine: Using API Version  1
	I0319 19:31:29.599769   32705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:29.600120   32705 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:29.602742   32705 out.go:177] * Stopping node "ha-218762-m04"  ...
	I0319 19:31:29.604039   32705 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 19:31:29.604062   32705 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:31:29.604329   32705 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 19:31:29.604359   32705 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:31:29.606934   32705 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:29.607320   32705 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:27:24 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:31:29.607354   32705 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:31:29.607506   32705 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:31:29.607675   32705 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:31:29.607827   32705 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:31:29.607982   32705 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:31:29.695391   32705 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 19:31:29.749904   32705 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 19:31:29.804597   32705 main.go:141] libmachine: Stopping "ha-218762-m04"...
	I0319 19:31:29.804652   32705 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:31:29.806191   32705 main.go:141] libmachine: (ha-218762-m04) Calling .Stop
	I0319 19:31:29.809422   32705 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 0/120
	I0319 19:31:30.811472   32705 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 1/120
	I0319 19:31:31.813376   32705 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:31:31.814777   32705 main.go:141] libmachine: Machine "ha-218762-m04" was stopped.
	I0319 19:31:31.814800   32705 stop.go:75] duration metric: took 2.210762365s to stop
	I0319 19:31:31.814833   32705 stop.go:39] StopHost: ha-218762-m03
	I0319 19:31:31.815311   32705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:31:31.815394   32705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:31:31.829950   32705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0319 19:31:31.830331   32705 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:31:31.830799   32705 main.go:141] libmachine: Using API Version  1
	I0319 19:31:31.830824   32705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:31:31.831162   32705 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:31:31.832932   32705 out.go:177] * Stopping node "ha-218762-m03"  ...
	I0319 19:31:31.834209   32705 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 19:31:31.834240   32705 main.go:141] libmachine: (ha-218762-m03) Calling .DriverName
	I0319 19:31:31.834442   32705 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 19:31:31.834461   32705 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHHostname
	I0319 19:31:31.837363   32705 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:31.837759   32705 main.go:141] libmachine: (ha-218762-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:34:f4", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:26:05 +0000 UTC Type:0 Mac:52:54:00:13:34:f4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-218762-m03 Clientid:01:52:54:00:13:34:f4}
	I0319 19:31:31.837783   32705 main.go:141] libmachine: (ha-218762-m03) DBG | domain ha-218762-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:13:34:f4 in network mk-ha-218762
	I0319 19:31:31.837910   32705 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHPort
	I0319 19:31:31.838064   32705 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHKeyPath
	I0319 19:31:31.838200   32705 main.go:141] libmachine: (ha-218762-m03) Calling .GetSSHUsername
	I0319 19:31:31.838338   32705 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m03/id_rsa Username:docker}
	I0319 19:31:31.930886   32705 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 19:31:31.985763   32705 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 19:31:32.039866   32705 main.go:141] libmachine: Stopping "ha-218762-m03"...
	I0319 19:31:32.039892   32705 main.go:141] libmachine: (ha-218762-m03) Calling .GetState
	I0319 19:31:32.041453   32705 main.go:141] libmachine: (ha-218762-m03) Calling .Stop
	I0319 19:31:32.044746   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 0/120
	I0319 19:31:33.045962   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 1/120
	I0319 19:31:34.047249   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 2/120
	I0319 19:31:35.048496   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 3/120
	I0319 19:31:36.050640   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 4/120
	I0319 19:31:37.052363   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 5/120
	I0319 19:31:38.053811   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 6/120
	I0319 19:31:39.055200   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 7/120
	I0319 19:31:40.056679   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 8/120
	I0319 19:31:41.057920   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 9/120
	I0319 19:31:42.059709   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 10/120
	I0319 19:31:43.061065   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 11/120
	I0319 19:31:44.062493   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 12/120
	I0319 19:31:45.063839   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 13/120
	I0319 19:31:46.065133   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 14/120
	I0319 19:31:47.067189   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 15/120
	I0319 19:31:48.068775   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 16/120
	I0319 19:31:49.070634   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 17/120
	I0319 19:31:50.071892   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 18/120
	I0319 19:31:51.073300   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 19/120
	I0319 19:31:52.074929   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 20/120
	I0319 19:31:53.076305   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 21/120
	I0319 19:31:54.077807   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 22/120
	I0319 19:31:55.079305   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 23/120
	I0319 19:31:56.080682   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 24/120
	I0319 19:31:57.082413   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 25/120
	I0319 19:31:58.083718   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 26/120
	I0319 19:31:59.085197   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 27/120
	I0319 19:32:00.086631   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 28/120
	I0319 19:32:01.088028   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 29/120
	I0319 19:32:02.089560   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 30/120
	I0319 19:32:03.090651   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 31/120
	I0319 19:32:04.092039   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 32/120
	I0319 19:32:05.093397   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 33/120
	I0319 19:32:06.094598   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 34/120
	I0319 19:32:07.096109   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 35/120
	I0319 19:32:08.097578   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 36/120
	I0319 19:32:09.098828   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 37/120
	I0319 19:32:10.100191   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 38/120
	I0319 19:32:11.101543   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 39/120
	I0319 19:32:12.103093   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 40/120
	I0319 19:32:13.104352   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 41/120
	I0319 19:32:14.105496   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 42/120
	I0319 19:32:15.106813   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 43/120
	I0319 19:32:16.108068   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 44/120
	I0319 19:32:17.109817   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 45/120
	I0319 19:32:18.111114   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 46/120
	I0319 19:32:19.112446   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 47/120
	I0319 19:32:20.113763   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 48/120
	I0319 19:32:21.114928   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 49/120
	I0319 19:32:22.116825   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 50/120
	I0319 19:32:23.118200   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 51/120
	I0319 19:32:24.119455   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 52/120
	I0319 19:32:25.120673   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 53/120
	I0319 19:32:26.122690   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 54/120
	I0319 19:32:27.124515   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 55/120
	I0319 19:32:28.125987   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 56/120
	I0319 19:32:29.127457   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 57/120
	I0319 19:32:30.129020   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 58/120
	I0319 19:32:31.130664   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 59/120
	I0319 19:32:32.132234   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 60/120
	I0319 19:32:33.133422   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 61/120
	I0319 19:32:34.134641   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 62/120
	I0319 19:32:35.135958   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 63/120
	I0319 19:32:36.137235   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 64/120
	I0319 19:32:37.138566   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 65/120
	I0319 19:32:38.139867   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 66/120
	I0319 19:32:39.141440   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 67/120
	I0319 19:32:40.142631   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 68/120
	I0319 19:32:41.143996   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 69/120
	I0319 19:32:42.145798   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 70/120
	I0319 19:32:43.146996   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 71/120
	I0319 19:32:44.148301   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 72/120
	I0319 19:32:45.149557   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 73/120
	I0319 19:32:46.150840   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 74/120
	I0319 19:32:47.152341   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 75/120
	I0319 19:32:48.153634   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 76/120
	I0319 19:32:49.154936   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 77/120
	I0319 19:32:50.156282   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 78/120
	I0319 19:32:51.157961   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 79/120
	I0319 19:32:52.159675   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 80/120
	I0319 19:32:53.160936   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 81/120
	I0319 19:32:54.162432   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 82/120
	I0319 19:32:55.163769   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 83/120
	I0319 19:32:56.165225   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 84/120
	I0319 19:32:57.166974   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 85/120
	I0319 19:32:58.168324   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 86/120
	I0319 19:32:59.169701   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 87/120
	I0319 19:33:00.170940   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 88/120
	I0319 19:33:01.172155   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 89/120
	I0319 19:33:02.173571   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 90/120
	I0319 19:33:03.174976   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 91/120
	I0319 19:33:04.176178   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 92/120
	I0319 19:33:05.177268   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 93/120
	I0319 19:33:06.178557   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 94/120
	I0319 19:33:07.180110   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 95/120
	I0319 19:33:08.181615   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 96/120
	I0319 19:33:09.182833   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 97/120
	I0319 19:33:10.184245   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 98/120
	I0319 19:33:11.185451   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 99/120
	I0319 19:33:12.187000   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 100/120
	I0319 19:33:13.188395   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 101/120
	I0319 19:33:14.189626   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 102/120
	I0319 19:33:15.191095   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 103/120
	I0319 19:33:16.192408   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 104/120
	I0319 19:33:17.194189   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 105/120
	I0319 19:33:18.195631   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 106/120
	I0319 19:33:19.197089   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 107/120
	I0319 19:33:20.198325   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 108/120
	I0319 19:33:21.199650   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 109/120
	I0319 19:33:22.201484   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 110/120
	I0319 19:33:23.202914   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 111/120
	I0319 19:33:24.204347   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 112/120
	I0319 19:33:25.205596   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 113/120
	I0319 19:33:26.206977   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 114/120
	I0319 19:33:27.208872   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 115/120
	I0319 19:33:28.210207   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 116/120
	I0319 19:33:29.211582   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 117/120
	I0319 19:33:30.212895   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 118/120
	I0319 19:33:31.214756   32705 main.go:141] libmachine: (ha-218762-m03) Waiting for machine to stop 119/120
	I0319 19:33:32.215350   32705 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0319 19:33:32.215419   32705 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0319 19:33:32.217559   32705 out.go:177] 
	W0319 19:33:32.218954   32705 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0319 19:33:32.218970   32705 out.go:239] * 
	* 
	W0319 19:33:32.221325   32705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 19:33:32.222896   32705 out.go:177] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-218762 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-218762 --wait=true -v=7 --alsologtostderr
E0319 19:34:30.843755   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:35:04.835089   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:36:27.881051   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-218762 --wait=true -v=7 --alsologtostderr: (5m1.459861196s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-218762
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-218762 -n ha-218762
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-218762 logs -n 25: (2.168966632s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m04 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp testdata/cp-test.txt                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m04_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03:/home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m03 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-218762 node stop m02 -v=7                                                     | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-218762 node start m02 -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-218762 -v=7                                                           | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-218762 -v=7                                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-218762 --wait=true -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:33 UTC | 19 Mar 24 19:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-218762                                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:33:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:33:32.286609   33042 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:33:32.286742   33042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:33:32.286756   33042 out.go:304] Setting ErrFile to fd 2...
	I0319 19:33:32.286763   33042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:33:32.286981   33042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:33:32.287531   33042 out.go:298] Setting JSON to false
	I0319 19:33:32.288454   33042 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4510,"bootTime":1710872302,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:33:32.288513   33042 start.go:139] virtualization: kvm guest
	I0319 19:33:32.290964   33042 out.go:177] * [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:33:32.292530   33042 notify.go:220] Checking for updates...
	I0319 19:33:32.292542   33042 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:33:32.294147   33042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:33:32.295577   33042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:33:32.296847   33042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:33:32.298103   33042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:33:32.299357   33042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:33:32.301120   33042 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:33:32.301217   33042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:33:32.301581   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:33:32.301616   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:33:32.316174   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0319 19:33:32.316528   33042 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:33:32.317002   33042 main.go:141] libmachine: Using API Version  1
	I0319 19:33:32.317022   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:33:32.317362   33042 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:33:32.317555   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:33:32.351520   33042 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 19:33:32.352881   33042 start.go:297] selected driver: kvm2
	I0319 19:33:32.352896   33042 start.go:901] validating driver "kvm2" against &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:33:32.353030   33042 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:33:32.353317   33042 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:33:32.353382   33042 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:33:32.367458   33042 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:33:32.368093   33042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:33:32.368167   33042 cni.go:84] Creating CNI manager for ""
	I0319 19:33:32.368181   33042 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0319 19:33:32.368224   33042 start.go:340] cluster config:
	{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:33:32.368376   33042 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:33:32.371078   33042 out.go:177] * Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	I0319 19:33:32.372422   33042 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:33:32.372463   33042 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:33:32.372475   33042 cache.go:56] Caching tarball of preloaded images
	I0319 19:33:32.372560   33042 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:33:32.372572   33042 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:33:32.372677   33042 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:33:32.372859   33042 start.go:360] acquireMachinesLock for ha-218762: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:33:32.372909   33042 start.go:364] duration metric: took 20.811µs to acquireMachinesLock for "ha-218762"
	I0319 19:33:32.372922   33042 start.go:96] Skipping create...Using existing machine configuration
	I0319 19:33:32.372929   33042 fix.go:54] fixHost starting: 
	I0319 19:33:32.373168   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:33:32.373198   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:33:32.386661   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0319 19:33:32.387055   33042 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:33:32.387516   33042 main.go:141] libmachine: Using API Version  1
	I0319 19:33:32.387540   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:33:32.387852   33042 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:33:32.388026   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:33:32.388164   33042 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:33:32.389659   33042 fix.go:112] recreateIfNeeded on ha-218762: state=Running err=<nil>
	W0319 19:33:32.389690   33042 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 19:33:32.391669   33042 out.go:177] * Updating the running kvm2 "ha-218762" VM ...
	I0319 19:33:32.392968   33042 machine.go:94] provisionDockerMachine start ...
	I0319 19:33:32.392983   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:33:32.393168   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.395460   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.395844   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.395869   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.395970   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.396139   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.396293   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.396436   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.396580   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:32.396804   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:32.396817   33042 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 19:33:32.509931   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:33:32.509964   33042 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:33:32.510226   33042 buildroot.go:166] provisioning hostname "ha-218762"
	I0319 19:33:32.510251   33042 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:33:32.510422   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.512993   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.513355   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.513380   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.513523   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.513704   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.513863   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.513958   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.514149   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:32.514360   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:32.514374   33042 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762 && echo "ha-218762" | sudo tee /etc/hostname
	I0319 19:33:32.644113   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:33:32.644141   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.647007   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.647413   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.647442   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.647742   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.647904   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.648055   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.648184   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.648334   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:32.648499   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:32.648515   33042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:33:32.757678   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:33:32.757708   33042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:33:32.757747   33042 buildroot.go:174] setting up certificates
	I0319 19:33:32.757759   33042 provision.go:84] configureAuth start
	I0319 19:33:32.757773   33042 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:33:32.758045   33042 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:33:32.760506   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.760819   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.760849   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.761049   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.763318   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.763714   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.763746   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.763882   33042 provision.go:143] copyHostCerts
	I0319 19:33:32.763928   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:33:32.763985   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:33:32.763998   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:33:32.764086   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:33:32.764273   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:33:32.764303   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:33:32.764313   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:33:32.764358   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:33:32.764459   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:33:32.764484   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:33:32.764494   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:33:32.764528   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:33:32.764614   33042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762 san=[127.0.0.1 192.168.39.200 ha-218762 localhost minikube]
	I0319 19:33:32.930565   33042 provision.go:177] copyRemoteCerts
	I0319 19:33:32.930618   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:33:32.930638   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.932945   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.933257   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.933277   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.933435   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.933624   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.933785   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.933923   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:33:33.022075   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:33:33.022166   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:33:33.051685   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:33:33.051760   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0319 19:33:33.081530   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:33:33.081584   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 19:33:33.110948   33042 provision.go:87] duration metric: took 353.177548ms to configureAuth
	I0319 19:33:33.110973   33042 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:33:33.111164   33042 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:33:33.111223   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:33.113603   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:33.114027   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:33.114047   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:33.114236   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:33.114413   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:33.114566   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:33.114678   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:33.114839   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:33.114996   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:33.115014   33042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:35:04.164021   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:35:04.164045   33042 machine.go:97] duration metric: took 1m31.771066251s to provisionDockerMachine
	I0319 19:35:04.164074   33042 start.go:293] postStartSetup for "ha-218762" (driver="kvm2")
	I0319 19:35:04.164103   33042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:35:04.164121   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.164484   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:35:04.164548   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.167437   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.167949   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.167990   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.168115   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.168312   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.168483   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.168623   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:35:04.256197   33042 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:35:04.261053   33042 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:35:04.261068   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:35:04.261129   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:35:04.261196   33042 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:35:04.261204   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:35:04.261281   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:35:04.271552   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:35:04.300278   33042 start.go:296] duration metric: took 136.191363ms for postStartSetup
	I0319 19:35:04.300316   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.300610   33042 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0319 19:35:04.300649   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.302797   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.303241   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.303269   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.303373   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.303552   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.303712   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.303856   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	W0319 19:35:04.392001   33042 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0319 19:35:04.392029   33042 fix.go:56] duration metric: took 1m32.019097317s for fixHost
	I0319 19:35:04.392064   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.394408   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.394733   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.394757   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.394947   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.395140   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.395324   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.395478   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.395622   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:35:04.395822   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:35:04.395836   33042 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:35:04.505652   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876904.467368521
	
	I0319 19:35:04.505673   33042 fix.go:216] guest clock: 1710876904.467368521
	I0319 19:35:04.505683   33042 fix.go:229] Guest: 2024-03-19 19:35:04.467368521 +0000 UTC Remote: 2024-03-19 19:35:04.392037453 +0000 UTC m=+92.158762356 (delta=75.331068ms)
	I0319 19:35:04.505712   33042 fix.go:200] guest clock delta is within tolerance: 75.331068ms
	I0319 19:35:04.505717   33042 start.go:83] releasing machines lock for "ha-218762", held for 1m32.132800309s
	I0319 19:35:04.505734   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.505961   33042 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:35:04.508564   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.508939   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.508962   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.509149   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.509814   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.509977   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.510065   33042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:35:04.510109   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.510155   33042 ssh_runner.go:195] Run: cat /version.json
	I0319 19:35:04.510181   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.512778   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513063   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513143   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.513168   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513271   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.513449   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.513470   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.513504   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513627   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.513664   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.513801   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:35:04.513818   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.513942   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.514088   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:35:04.594396   33042 ssh_runner.go:195] Run: systemctl --version
	I0319 19:35:04.618125   33042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:35:04.787226   33042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:35:04.797150   33042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:35:04.797208   33042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:35:04.808526   33042 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 19:35:04.808550   33042 start.go:494] detecting cgroup driver to use...
	I0319 19:35:04.808622   33042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:35:04.829793   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:35:04.845583   33042 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:35:04.845628   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:35:04.862206   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:35:04.906124   33042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:35:05.069464   33042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:35:05.228357   33042 docker.go:233] disabling docker service ...
	I0319 19:35:05.228418   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:35:05.247557   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:35:05.263674   33042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:35:05.432853   33042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:35:05.607349   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:35:05.623332   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:35:05.645123   33042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:35:05.645195   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.658231   33042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:35:05.658287   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.671663   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.685693   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.697626   33042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:35:05.709979   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.722660   33042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.734084   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.746161   33042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:35:05.756224   33042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:35:05.766634   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:35:05.914317   33042 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:35:07.762548   33042 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.848201785s)
	I0319 19:35:07.762575   33042 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:35:07.762626   33042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:35:07.769716   33042 start.go:562] Will wait 60s for crictl version
	I0319 19:35:07.769808   33042 ssh_runner.go:195] Run: which crictl
	I0319 19:35:07.774555   33042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:35:07.829114   33042 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:35:07.829189   33042 ssh_runner.go:195] Run: crio --version
	I0319 19:35:07.861271   33042 ssh_runner.go:195] Run: crio --version
	I0319 19:35:07.896563   33042 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:35:07.897907   33042 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:35:07.900412   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:07.900801   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:07.900831   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:07.901046   33042 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:35:07.907304   33042 kubeadm.go:877] updating cluster {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:35:07.907605   33042 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:35:07.907675   33042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:35:07.952307   33042 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:35:07.952333   33042 crio.go:433] Images already preloaded, skipping extraction
	I0319 19:35:07.952403   33042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:35:07.990516   33042 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:35:07.990538   33042 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:35:07.990547   33042 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.29.3 crio true true} ...
	I0319 19:35:07.990634   33042 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:35:07.990693   33042 ssh_runner.go:195] Run: crio config
	I0319 19:35:08.048605   33042 cni.go:84] Creating CNI manager for ""
	I0319 19:35:08.048629   33042 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0319 19:35:08.048643   33042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:35:08.048672   33042 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-218762 NodeName:ha-218762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:35:08.048855   33042 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-218762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 19:35:08.048883   33042 kube-vip.go:111] generating kube-vip config ...
	I0319 19:35:08.048936   33042 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:35:08.062255   33042 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:35:08.062371   33042 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0319 19:35:08.062452   33042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:35:08.073781   33042 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:35:08.073854   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0319 19:35:08.085167   33042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0319 19:35:08.105200   33042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:35:08.124185   33042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0319 19:35:08.143856   33042 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:35:08.162255   33042 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:35:08.167529   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:35:08.316341   33042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:35:08.334164   33042 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.200
	I0319 19:35:08.334190   33042 certs.go:194] generating shared ca certs ...
	I0319 19:35:08.334209   33042 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:35:08.334405   33042 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:35:08.334458   33042 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:35:08.334471   33042 certs.go:256] generating profile certs ...
	I0319 19:35:08.334595   33042 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:35:08.334636   33042 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188
	I0319 19:35:08.334653   33042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.15 192.168.39.254]
	I0319 19:35:08.426565   33042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188 ...
	I0319 19:35:08.426593   33042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188: {Name:mkc6ecf9faceb5a51d2be70a6f76e2e5b034bbc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:35:08.426761   33042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188 ...
	I0319 19:35:08.426773   33042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188: {Name:mk90ae9d7217424d4e02d14fe627f22b3debef47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:35:08.426843   33042 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:35:08.426974   33042 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:35:08.427093   33042 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:35:08.427108   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:35:08.427122   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:35:08.427136   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:35:08.427149   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:35:08.427161   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:35:08.427171   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:35:08.427188   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:35:08.427199   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:35:08.427253   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:35:08.427278   33042 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:35:08.427289   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:35:08.427311   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:35:08.427332   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:35:08.427353   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:35:08.427387   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:35:08.427414   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.427428   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.427440   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.428021   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:35:08.456237   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:35:08.482884   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:35:08.508928   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:35:08.535736   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 19:35:08.562934   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:35:08.590304   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:35:08.616243   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:35:08.643109   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:35:08.669254   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:35:08.695463   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:35:08.721538   33042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:35:08.739866   33042 ssh_runner.go:195] Run: openssl version
	I0319 19:35:08.747769   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:35:08.767018   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.772617   33042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.772677   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.779620   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:35:08.790459   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:35:08.802550   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.807717   33042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.807772   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.814137   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:35:08.824297   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:35:08.835732   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.840938   33042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.840992   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.847209   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:35:08.857432   33042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:35:08.862515   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 19:35:08.868692   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 19:35:08.874718   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 19:35:08.881188   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 19:35:08.887064   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 19:35:08.892726   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 19:35:08.898477   33042 kubeadm.go:391] StartCluster: {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
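The StartCluster dump above is minikube's ClusterConfig for the ha-218762 profile: three crio control-plane nodes plus one worker on the kvm2 driver, with the API-server VIP at 192.168.39.254. The same structure is persisted on the host running the tests; a small sketch of where to look, assuming the default MINIKUBE_HOME (the path and commands below are the standard ones, not taken from this log):

    # persisted copy of the cluster config shown in the log (default location assumed)
    cat ~/.minikube/profiles/ha-218762/config.json
    # node names, roles and IPs for the profile
    minikube -p ha-218762 node list
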
	I0319 19:35:08.898583   33042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:35:08.898616   33042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:35:08.949323   33042 cri.go:89] found id: "a1d65870cde5260ce0500ccb71289c0f86b6801fac75a0e0fa9f049e3c71f5af"
	I0319 19:35:08.949341   33042 cri.go:89] found id: "d1de004c738171c7323ca701e5d648369cd503aa9cd2d5e8906959b5fcce539f"
	I0319 19:35:08.949344   33042 cri.go:89] found id: "9f2ba9b095fcc8dd64cb30696eb6e7a3126bf13e385800dd50e3866f7118578f"
	I0319 19:35:08.949347   33042 cri.go:89] found id: "488c85cb47be84f1519718650c1207789e6ff34e6a6d8dbdcae93c151a17f3ae"
	I0319 19:35:08.949350   33042 cri.go:89] found id: "109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57"
	I0319 19:35:08.949353   33042 cri.go:89] found id: "4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3"
	I0319 19:35:08.949356   33042 cri.go:89] found id: "49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851"
	I0319 19:35:08.949358   33042 cri.go:89] found id: "ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986"
	I0319 19:35:08.949360   33042 cri.go:89] found id: "ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6"
	I0319 19:35:08.949365   33042 cri.go:89] found id: "da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8"
	I0319 19:35:08.949368   33042 cri.go:89] found id: "dc37df944702003608d704925db1515b753c461128e874e10764393af312326c"
	I0319 19:35:08.949370   33042 cri.go:89] found id: "136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda"
	I0319 19:35:08.949376   33042 cri.go:89] found id: "82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab"
	I0319 19:35:08.949378   33042 cri.go:89] found id: "b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a"
	I0319 19:35:08.949383   33042 cri.go:89] found id: ""
	I0319 19:35:08.949419   33042 ssh_runner.go:195] Run: sudo runc list -f json
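The found id entries above are the output of the crictl ps invocation at 19:35:08.898616: the --label io.kubernetes.pod.namespace=kube-system filter restricts the listing to kube-system containers, and the runc list -f json call that follows is apparently used to cross-check which of those IDs the OCI runtime itself still tracks. A hedged sketch of the same queries run by hand on the node (the inspected ID is the first one reported in this log):

    # kube-system container IDs, all states, IDs only
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # state and pod metadata for one of the returned IDs
    sudo crictl inspect a1d65870cde5260ce0500ccb71289c0f86b6801fac75a0e0fa9f049e3c71f5af
    # what runc itself reports as live containers
    sudo runc list -f json
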
	
	
	==> CRI-O <==
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.552481646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877114551756849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f938269-b542-40d9-8549-1ae96d80399c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.554183569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a411b27-60cf-421f-975b-a5bc9a47a698 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.554264082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a411b27-60cf-421f-975b-a5bc9a47a698 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.555480355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a411b27-60cf-421f-975b-a5bc9a47a698 name=/runtime.v1.RuntimeService/ListContainers
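The CRI-O debug entries above (and the near-identical blocks that follow under later request ids) record Version, ImageFsInfo and ListContainers calls hitting crio in quick succession, each answered with the full container list for ha-218762. The same information can be queried from the node directly with crictl; a sketch:

    sudo crictl version        # RuntimeService/Version (reports cri-o 1.29.1 here)
    sudo crictl imagefsinfo    # ImageService/ImageFsInfo
    sudo crictl ps -a -o json  # RuntimeService/ListContainers as JSON
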
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.633959152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3701e039-412f-4608-b695-28b3355b00f6 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.634037746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3701e039-412f-4608-b695-28b3355b00f6 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.636378156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=249ddc82-35fe-4a3e-970f-d33952ef347f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.637182719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877114637138226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=249ddc82-35fe-4a3e-970f-d33952ef347f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.638308337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44c87351-4407-4dd0-a591-82e42580ec9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.638622532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44c87351-4407-4dd0-a591-82e42580ec9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.640513786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44c87351-4407-4dd0-a591-82e42580ec9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.702208246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59501367-6c4c-406e-99c5-141d26b79964 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.702313040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59501367-6c4c-406e-99c5-141d26b79964 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.704232558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a41429e-540f-40a4-ab4c-578ea7ad014a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.705032967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877114704995702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a41429e-540f-40a4-ab4c-578ea7ad014a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.706077792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0db1aaa8-3a98-448e-97c3-cc6c6f398406 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.706194307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0db1aaa8-3a98-448e-97c3-cc6c6f398406 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.707532388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0db1aaa8-3a98-448e-97c3-cc6c6f398406 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.770983948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12f78a2b-64d4-456e-9002-5bd5d04256a5 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.771068797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12f78a2b-64d4-456e-9002-5bd5d04256a5 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.772390610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60cc64b7-b668-4ef5-8568-5e235c763cb8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.772939100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877114772915407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60cc64b7-b668-4ef5-8568-5e235c763cb8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.773500888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1ad6557-e44c-41d3-a4f6-0411b16693ef name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.773591242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1ad6557-e44c-41d3-a4f6-0411b16693ef name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:38:34 ha-218762 crio[3796]: time="2024-03-19 19:38:34.774114056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1ad6557-e44c-41d3-a4f6-0411b16693ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	57b67e1d9f711       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   9e1751c3a1b96       storage-provisioner
	b97e6744af918       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Running             kindnet-cni               3                   9f5d0382c34c1       kindnet-d8pkw
	cd231cd9e49b3       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Running             kube-controller-manager   2                   a8ecc5bc666eb       kube-controller-manager-ha-218762
	6338f56543282       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Running             kube-apiserver            3                   592738c55d5d7       kube-apiserver-ha-218762
	e184092c1753d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   9ae1282eca7fd       busybox-7fdf7869d9-d8xsk
	b3ac103d077b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   9e1751c3a1b96       storage-provisioner
	29b54ac96c6cb       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      3 minutes ago        Running             kube-vip                  0                   241791cae01a3       kube-vip-ha-218762
	3759bb815b0bd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   8b012633323a1       coredns-76f75df574-6f64w
	7fe718f015a06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   b66ed00d03541       coredns-76f75df574-zlz9l
	e64d30502df53       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago        Running             kube-proxy                1                   a0b75df1436e1       kube-proxy-qd8kk
	8edf1240fc777       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               2                   9f5d0382c34c1       kindnet-d8pkw
	e004ed7f983d2       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago        Running             kube-scheduler            1                   3ee688cdd562c       kube-scheduler-ha-218762
	d744b8d4b2141       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago        Exited              kube-apiserver            2                   592738c55d5d7       kube-apiserver-ha-218762
	89ce3ef06f55e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   c1a4e502ec750       etcd-ha-218762
	76c29ad320500       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago        Exited              kube-controller-manager   1                   a8ecc5bc666eb       kube-controller-manager-ha-218762
	5d5224aff0311       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   03d5a8bf10dee       busybox-7fdf7869d9-d8xsk
	109c2437b7712       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   42b1b389a8129       coredns-76f75df574-6f64w
	4c1e36efc888a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   9e44b306f2e4f       coredns-76f75df574-zlz9l
	ab7b5d52d6006       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      14 minutes ago       Exited              kube-proxy                0                   c02a60ba78138       kube-proxy-qd8kk
	dc37df9447020       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   59a484b792912       etcd-ha-218762
	b8f592d52269d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      14 minutes ago       Exited              kube-scheduler            0                   ffe45f05ed53a       kube-scheduler-ha-218762
	
	
	==> coredns [109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57] <==
	[INFO] 10.244.0.4:33585 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003760519s
	[INFO] 10.244.0.4:59082 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137291s
	[INFO] 10.244.0.4:40935 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118623s
	[INFO] 10.244.0.4:47943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107248s
	[INFO] 10.244.0.4:59058 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076766s
	[INFO] 10.244.1.2:50311 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001848487s
	[INFO] 10.244.1.2:43198 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174765s
	[INFO] 10.244.1.2:52346 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001415553s
	[INFO] 10.244.1.2:43441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076976s
	[INFO] 10.244.1.2:34726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138048s
	[INFO] 10.244.1.2:45607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112925s
	[INFO] 10.244.2.2:40744 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001749217s
	[INFO] 10.244.2.2:53029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111621s
	[INFO] 10.244.2.2:40938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014131s
	[INFO] 10.244.2.2:56391 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130828s
	[INFO] 10.244.1.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015755s
	[INFO] 10.244.2.2:42534 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120056s
	[INFO] 10.244.2.2:54358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316425s
	[INFO] 10.244.0.4:60417 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238089s
	[INFO] 10.244.0.4:60483 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144782s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3759bb815b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3] <==
	[INFO] 10.244.2.2:44372 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161391s
	[INFO] 10.244.0.4:55323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007536s
	[INFO] 10.244.0.4:36522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010122s
	[INFO] 10.244.0.4:59910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068387s
	[INFO] 10.244.0.4:56467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053097s
	[INFO] 10.244.1.2:47288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107648s
	[INFO] 10.244.1.2:47476 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075973s
	[INFO] 10.244.1.2:33459 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186954s
	[INFO] 10.244.2.2:42752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177891s
	[INFO] 10.244.2.2:55553 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189177s
	[INFO] 10.244.0.4:39711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067897s
	[INFO] 10.244.0.4:46192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.002995771s
	[INFO] 10.244.1.2:52462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000332016s
	[INFO] 10.244.1.2:33081 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215617s
	[INFO] 10.244.1.2:48821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092021s
	[INFO] 10.244.1.2:39937 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000452168s
	[INFO] 10.244.2.2:43887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122925s
	[INFO] 10.244.2.2:38523 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093183s
	[INFO] 10.244.2.2:56286 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149396s
	[INFO] 10.244.2.2:33782 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081737s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45534->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45534->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45532->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45532->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-218762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:38:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:36:00 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:36:00 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:36:00 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:36:00 +0000   Tue, 19 Mar 2024 19:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-218762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee6305e340734ffab00fb0013188dc6a
	  System UUID:                ee6305e3-4073-4ffa-b00f-b0013188dc6a
	  Boot ID:                    4a3c9f80-1526-4057-9e0e-fd3e10e41bd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d8xsk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-76f75df574-6f64w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-76f75df574-zlz9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-218762                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-d8pkw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-218762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-218762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-qd8kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-218762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-218762                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m33s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-218762 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Warning  ContainerGCFailed        3m39s (x2 over 4m39s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m29s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           2m22s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	
	
	Name:               ha-218762-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:25:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:38:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:36:41 +0000   Tue, 19 Mar 2024 19:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:36:41 +0000   Tue, 19 Mar 2024 19:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:36:41 +0000   Tue, 19 Mar 2024 19:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:36:41 +0000   Tue, 19 Mar 2024 19:36:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-218762-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21ee6ca9760341f0b88147e7d26bc5a4
	  System UUID:                21ee6ca9-7603-41f0-b881-47e7d26bc5a4
	  Boot ID:                    93ea4244-1402-4285-9999-90af84712cb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ds2kh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-218762-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-4b7jg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-218762-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-218762-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9q4nx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-218762-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-218762-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m11s                kube-proxy       
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)    kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  NodeNotReady             9m48s                node-controller  Node ha-218762-m02 status is now: NodeNotReady
	  Normal  Starting                 3m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m3s)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m3s)  kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x7 over 3m3s)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m29s                node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           2m22s                node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           30s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	
	
	Name:               ha-218762-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_26_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:26:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:38:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:38:10 +0000   Tue, 19 Mar 2024 19:37:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:38:10 +0000   Tue, 19 Mar 2024 19:37:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:38:10 +0000   Tue, 19 Mar 2024 19:37:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:38:10 +0000   Tue, 19 Mar 2024 19:37:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-218762-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc67d42b66264826a0e5dce81a989b48
	  System UUID:                cc67d42b-6626-4826-a0e5-dce81a989b48
	  Boot ID:                    d4059426-4495-491c-b75b-cc18f886bfa0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-qrc54                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-218762-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-wv72v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-218762-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-218762-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lq48k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-218762-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-218762-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-218762-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal   RegisteredNode           2m29s              node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal   RegisteredNode           2m22s              node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	  Normal   NodeNotReady             109s               node-controller  Node ha-218762-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s (x3 over 56s)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x3 over 56s)  kubelet          Node ha-218762-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x3 over 56s)  kubelet          Node ha-218762-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s (x2 over 56s)  kubelet          Node ha-218762-m03 has been rebooted, boot id: d4059426-4495-491c-b75b-cc18f886bfa0
	  Normal   NodeReady                56s (x2 over 56s)  kubelet          Node ha-218762-m03 status is now: NodeReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-218762-m03 event: Registered Node ha-218762-m03 in Controller
	
	
	Name:               ha-218762-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_27_38_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:38:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:38:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-218762-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3252307468a44b83a5ab5199d03a0035
	  System UUID:                32523074-68a4-4b83-a5ab-5199d03a0035
	  Boot ID:                    a0d24f10-73b5-4b9e-ae00-6b857db48ab4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hslwj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-nth69    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-218762-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m29s              node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           2m22s              node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   NodeNotReady             109s               node-controller  Node ha-218762-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-218762-m04 has been rebooted, boot id: a0d24f10-73b5-4b9e-ae00-6b857db48ab4
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-218762-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.074231] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.062282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064060] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.205706] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.113821] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.284359] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.977018] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.063791] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.785726] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.566086] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.304560] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.098669] kauditd_printk_skb: 51 callbacks suppressed
	[Mar19 19:24] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 19:25] kauditd_printk_skb: 74 callbacks suppressed
	[Mar19 19:35] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	[  +0.163668] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +0.200453] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[  +0.176254] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.314534] systemd-fstab-generator[3782]: Ignoring "noauto" option for root device
	[  +2.399621] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +5.371303] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.623371] kauditd_printk_skb: 98 callbacks suppressed
	[ +37.084386] kauditd_printk_skb: 1 callbacks suppressed
	[Mar19 19:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.825614] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff] <==
	{"level":"warn","ts":"2024-03-19T19:37:33.292226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:37:33.392012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fe8c4457455e3a5","from":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-19T19:37:33.410255Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.15:2380/version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-19T19:37:33.410329Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-19T19:37:35.629855Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:35.631026Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:37.412064Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.15:2380/version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:37.412129Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:40.630423Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:40.631864Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:41.414175Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.15:2380/version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:41.414356Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:45.416171Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.15:2380/version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:45.416329Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c7942b8fd0a5905a","error":"Get \"https://192.168.39.15:2380/version\": dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:45.630976Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:45.632231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-19T19:37:47.874781Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:37:47.875003Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:37:47.881074Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:37:47.894464Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fe8c4457455e3a5","to":"c7942b8fd0a5905a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-19T19:37:47.894533Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:37:47.897974Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fe8c4457455e3a5","to":"c7942b8fd0a5905a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-19T19:37:47.898067Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:37:50.632277Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:50.632376Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	
	
	==> etcd [dc37df944702003608d704925db1515b753c461128e874e10764393af312326c] <==
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-19T19:33:33.327488Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T19:33:33.328117Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T19:33:33.329537Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fe8c4457455e3a5","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-19T19:33:33.330028Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330171Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330412Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330641Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330719Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330752Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330765Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330771Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.330779Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.330879Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.33095Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.330977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.331034Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.331072Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.333908Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-03-19T19:33:33.334129Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-03-19T19:33:33.334147Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-218762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	
	
	==> kernel <==
	 19:38:35 up 15 min,  0 users,  load average: 0.55, 0.53, 0.38
	Linux ha-218762 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db] <==
	I0319 19:35:15.073713       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0319 19:35:15.082151       1 main.go:107] hostIP = 192.168.39.200
	podIP = 192.168.39.200
	I0319 19:35:15.082344       1 main.go:116] setting mtu 1500 for CNI 
	I0319 19:35:15.082377       1 main.go:146] kindnetd IP family: "ipv4"
	I0319 19:35:15.082414       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0319 19:35:18.243500       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0319 19:35:28.251120       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0319 19:35:30.530434       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0319 19:35:33.602332       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0319 19:35:36.674326       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a] <==
	I0319 19:38:04.546061       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:38:14.554747       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:38:14.554859       1 main.go:227] handling current node
	I0319 19:38:14.554880       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:38:14.554888       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:38:14.554998       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:38:14.555003       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:38:14.555046       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:38:14.555081       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:38:24.563420       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:38:24.563478       1 main.go:227] handling current node
	I0319 19:38:24.563490       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:38:24.563496       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:38:24.563605       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:38:24.563638       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:38:24.563692       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:38:24.563726       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:38:34.590660       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:38:34.590879       1 main.go:227] handling current node
	I0319 19:38:34.590903       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:38:34.590915       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:38:34.591188       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I0319 19:38:34.591283       1 main.go:250] Node ha-218762-m03 has CIDR [10.244.2.0/24] 
	I0319 19:38:34.591412       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:38:34.591456       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99] <==
	I0319 19:36:00.650946       1 naming_controller.go:291] Starting NamingConditionController
	I0319 19:36:00.651624       1 establishing_controller.go:76] Starting EstablishingController
	I0319 19:36:00.651760       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0319 19:36:00.652296       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0319 19:36:00.653001       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0319 19:36:00.736338       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 19:36:00.737739       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 19:36:00.738270       1 aggregator.go:165] initial CRD sync complete...
	I0319 19:36:00.738310       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 19:36:00.738370       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 19:36:00.744583       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 19:36:00.793738       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 19:36:00.831228       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 19:36:00.831303       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0319 19:36:00.831427       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 19:36:00.833585       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 19:36:00.836493       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 19:36:00.839871       1 cache.go:39] Caches are synced for autoregister controller
	W0319 19:36:00.853478       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.234]
	I0319 19:36:00.857322       1 controller.go:624] quota admission added evaluator for: endpoints
	I0319 19:36:00.871921       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0319 19:36:00.875453       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0319 19:36:01.644844       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0319 19:36:02.305662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.200 192.168.39.234]
	W0319 19:36:12.303433       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200 192.168.39.234]
	
	
	==> kube-apiserver [d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc] <==
	I0319 19:35:15.124630       1 options.go:222] external host was not specified, using 192.168.39.200
	I0319 19:35:15.131991       1 server.go:148] Version: v1.29.3
	I0319 19:35:15.132389       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:35:15.750666       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0319 19:35:15.765707       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0319 19:35:15.765750       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0319 19:35:15.770127       1 instance.go:297] Using reconciler: lease
	W0319 19:35:35.749996       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0319 19:35:35.750099       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0319 19:35:35.771547       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	W0319 19:35:35.771547       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0] <==
	I0319 19:35:15.860675       1 serving.go:380] Generated self-signed cert in-memory
	I0319 19:35:16.293930       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0319 19:35:16.294052       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:35:16.296398       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 19:35:16.296604       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 19:35:16.297549       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 19:35:16.297633       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0319 19:35:36.778981       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.200:8443/healthz\": dial tcp 192.168.39.200:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca] <==
	I0319 19:36:13.982614       1 shared_informer.go:318] Caches are synced for cronjob
	I0319 19:36:14.001971       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 19:36:14.019934       1 shared_informer.go:318] Caches are synced for namespace
	I0319 19:36:14.041319       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 19:36:14.073868       1 shared_informer.go:318] Caches are synced for service account
	I0319 19:36:14.439007       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 19:36:14.450863       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 19:36:14.450915       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0319 19:36:20.663265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.142µs"
	I0319 19:36:27.100189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="59.272154ms"
	I0319 19:36:27.100300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="39.061µs"
	I0319 19:36:32.586349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.737085ms"
	I0319 19:36:32.586871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="124.851µs"
	I0319 19:36:32.611310       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-lv2ld EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-lv2ld\": the object has been modified; please apply your changes to the latest version and try again"
	I0319 19:36:32.611674       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e47508d7-7926-47c0-8a23-039e15feba7a", APIVersion:"v1", ResourceVersion:"282", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-lv2ld EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-lv2ld": the object has been modified; please apply your changes to the latest version and try again
	I0319 19:36:32.618073       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0319 19:36:46.361366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="16.154309ms"
	I0319 19:36:46.361530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.032µs"
	I0319 19:37:02.597069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="22.30136ms"
	I0319 19:37:02.599161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="153.734µs"
	I0319 19:37:40.580372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="85.653µs"
	I0319 19:37:41.450255       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-qrc54" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-qrc54"
	I0319 19:38:05.869092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.304103ms"
	I0319 19:38:05.869908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="284.344µs"
	I0319 19:38:27.113138       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-218762-m04"
	
	
	==> kube-proxy [ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6] <==
	E0319 19:32:28.965404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:32.035632       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:32.035877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:32.036002       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:32.036067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:35.108641       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:35.108872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:38.179557       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:38.180161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:38.180089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:38.180241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:41.252377       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:41.252670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:47.396133       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:47.396305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:50.467197       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:50.467719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:50.467919       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:50.467971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:33:08.899025       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:33:08.899222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:33:15.043609       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:33:15.043783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:33:18.114394       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:33:18.114516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec] <==
	I0319 19:35:16.231246       1 server_others.go:72] "Using iptables proxy"
	E0319 19:35:17.922736       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:20.994762       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:24.067129       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:30.212164       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:42.499409       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0319 19:36:01.210939       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0319 19:36:01.284898       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:36:01.284967       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:36:01.285005       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:36:01.289546       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:36:01.290085       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:36:01.290131       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:36:01.292784       1 config.go:188] "Starting service config controller"
	I0319 19:36:01.292917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:36:01.294451       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:36:01.294490       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:36:01.295482       1 config.go:315] "Starting node config controller"
	I0319 19:36:01.295520       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:36:01.394506       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:36:01.395963       1 shared_informer.go:318] Caches are synced for node config
	I0319 19:36:01.396039       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a] <==
	E0319 19:33:26.378038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:33:26.726044       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 19:33:26.726127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 19:33:26.748198       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 19:33:26.748273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 19:33:26.928301       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 19:33:26.928396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 19:33:26.976197       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:33:26.976221       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 19:33:27.006723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 19:33:27.006751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 19:33:27.093694       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 19:33:27.093728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 19:33:27.351454       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0319 19:33:27.351539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 19:33:27.352719       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 19:33:27.352776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 19:33:27.472941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 19:33:27.473141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 19:33:28.231106       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:33:28.231163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:33:28.321232       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 19:33:28.321317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0319 19:33:33.250655       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 19:33:33.262223       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	
	
	==> kube-scheduler [e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91] <==
	W0319 19:35:53.406313       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.200:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:53.406462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.200:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:55.070471       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.200:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:55.070579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.200:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:55.457722       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.200:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:55.458021       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.200:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:55.459458       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.200:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:55.459554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.200:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:56.245959       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:56.246001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:56.710434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:56.710481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:57.032096       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:57.032202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:57.643428       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:57.643496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.000457       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.000501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.179736       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.179931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.398246       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.398328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:36:00.710582       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 19:36:00.710646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0319 19:36:19.987053       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 19:35:57 ha-218762 kubelet[1386]: E0319 19:35:57.858764    1386 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-218762\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 19 19:35:58 ha-218762 kubelet[1386]: I0319 19:35:58.134129    1386 scope.go:117] "RemoveContainer" containerID="d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc"
	Mar 19 19:35:58 ha-218762 kubelet[1386]: I0319 19:35:58.136892    1386 scope.go:117] "RemoveContainer" containerID="76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0"
	Mar 19 19:36:03 ha-218762 kubelet[1386]: I0319 19:36:03.134503    1386 scope.go:117] "RemoveContainer" containerID="8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db"
	Mar 19 19:36:05 ha-218762 kubelet[1386]: I0319 19:36:05.134535    1386 scope.go:117] "RemoveContainer" containerID="b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c"
	Mar 19 19:36:05 ha-218762 kubelet[1386]: E0319 19:36:05.134898    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a496ada-aaf7-47a5-bd5d-5d909ef5df10)\"" pod="kube-system/storage-provisioner" podUID="6a496ada-aaf7-47a5-bd5d-5d909ef5df10"
	Mar 19 19:36:05 ha-218762 kubelet[1386]: I0319 19:36:05.301660    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-d8xsk" podStartSLOduration=543.209659725 podStartE2EDuration="9m6.301594389s" podCreationTimestamp="2024-03-19 19:26:59 +0000 UTC" firstStartedPulling="2024-03-19 19:27:00.479663984 +0000 UTC m=+184.535321021" lastFinishedPulling="2024-03-19 19:27:03.571598648 +0000 UTC m=+187.627255685" observedRunningTime="2024-03-19 19:27:04.093233973 +0000 UTC m=+188.148891022" watchObservedRunningTime="2024-03-19 19:36:05.301594389 +0000 UTC m=+729.357251428"
	Mar 19 19:36:17 ha-218762 kubelet[1386]: I0319 19:36:17.133637    1386 scope.go:117] "RemoveContainer" containerID="b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c"
	Mar 19 19:36:17 ha-218762 kubelet[1386]: E0319 19:36:17.134316    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a496ada-aaf7-47a5-bd5d-5d909ef5df10)\"" pod="kube-system/storage-provisioner" podUID="6a496ada-aaf7-47a5-bd5d-5d909ef5df10"
	Mar 19 19:36:28 ha-218762 kubelet[1386]: I0319 19:36:28.134215    1386 scope.go:117] "RemoveContainer" containerID="b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c"
	Mar 19 19:36:28 ha-218762 kubelet[1386]: E0319 19:36:28.134694    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a496ada-aaf7-47a5-bd5d-5d909ef5df10)\"" pod="kube-system/storage-provisioner" podUID="6a496ada-aaf7-47a5-bd5d-5d909ef5df10"
	Mar 19 19:36:39 ha-218762 kubelet[1386]: I0319 19:36:39.134351    1386 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-218762" podUID="d889098d-f271-4dcf-8dbc-e1cddbe35405"
	Mar 19 19:36:39 ha-218762 kubelet[1386]: I0319 19:36:39.159322    1386 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-218762"
	Mar 19 19:36:41 ha-218762 kubelet[1386]: I0319 19:36:41.134157    1386 scope.go:117] "RemoveContainer" containerID="b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c"
	Mar 19 19:36:56 ha-218762 kubelet[1386]: E0319 19:36:56.169667    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:36:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:36:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:36:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:36:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:37:02 ha-218762 kubelet[1386]: I0319 19:37:02.573520    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-218762" podStartSLOduration=23.573443292 podStartE2EDuration="23.573443292s" podCreationTimestamp="2024-03-19 19:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-19 19:36:42.214034831 +0000 UTC m=+766.269691857" watchObservedRunningTime="2024-03-19 19:37:02.573443292 +0000 UTC m=+786.629100339"
	Mar 19 19:37:56 ha-218762 kubelet[1386]: E0319 19:37:56.168290    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:37:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:37:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:37:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:37:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 19:38:34.171182   34300 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-218762 -n ha-218762
helpers_test.go:261: (dbg) Run:  kubectl --context ha-218762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (427.17s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 stop -v=7 --alsologtostderr
E0319 19:39:30.843994   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:40:04.834343   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:40:53.887358   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 stop -v=7 --alsologtostderr: exit status 82 (2m0.485554794s)

                                                
                                                
-- stdout --
	* Stopping node "ha-218762-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:38:54.540013   34685 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:38:54.540150   34685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:38:54.540162   34685 out.go:304] Setting ErrFile to fd 2...
	I0319 19:38:54.540167   34685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:38:54.540465   34685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:38:54.540781   34685 out.go:298] Setting JSON to false
	I0319 19:38:54.540883   34685 mustload.go:65] Loading cluster: ha-218762
	I0319 19:38:54.541415   34685 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:38:54.541510   34685 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:38:54.541680   34685 mustload.go:65] Loading cluster: ha-218762
	I0319 19:38:54.541866   34685 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:38:54.541905   34685 stop.go:39] StopHost: ha-218762-m04
	I0319 19:38:54.542365   34685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:38:54.542438   34685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:38:54.557257   34685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0319 19:38:54.558049   34685 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:38:54.559253   34685 main.go:141] libmachine: Using API Version  1
	I0319 19:38:54.559279   34685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:38:54.559659   34685 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:38:54.561927   34685 out.go:177] * Stopping node "ha-218762-m04"  ...
	I0319 19:38:54.563148   34685 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 19:38:54.563185   34685 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:38:54.563403   34685 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 19:38:54.563423   34685 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:38:54.566028   34685 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:38:54.566421   34685 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:38:21 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:38:54.566462   34685 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:38:54.566581   34685 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:38:54.566749   34685 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:38:54.566908   34685 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:38:54.567010   34685 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	I0319 19:38:54.658510   34685 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 19:38:54.712965   34685 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 19:38:54.766429   34685 main.go:141] libmachine: Stopping "ha-218762-m04"...
	I0319 19:38:54.766468   34685 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:38:54.767998   34685 main.go:141] libmachine: (ha-218762-m04) Calling .Stop
	I0319 19:38:54.771516   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 0/120
	I0319 19:38:55.772862   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 1/120
	I0319 19:38:56.774700   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 2/120
	I0319 19:38:57.775931   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 3/120
	I0319 19:38:58.777402   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 4/120
	I0319 19:38:59.779363   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 5/120
	I0319 19:39:00.780907   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 6/120
	I0319 19:39:01.782527   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 7/120
	I0319 19:39:02.784039   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 8/120
	I0319 19:39:03.785278   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 9/120
	I0319 19:39:04.787595   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 10/120
	I0319 19:39:05.789785   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 11/120
	I0319 19:39:06.791232   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 12/120
	I0319 19:39:07.792555   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 13/120
	I0319 19:39:08.794696   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 14/120
	I0319 19:39:09.796495   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 15/120
	I0319 19:39:10.797939   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 16/120
	I0319 19:39:11.799598   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 17/120
	I0319 19:39:12.801431   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 18/120
	I0319 19:39:13.802672   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 19/120
	I0319 19:39:14.804872   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 20/120
	I0319 19:39:15.806843   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 21/120
	I0319 19:39:16.808219   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 22/120
	I0319 19:39:17.809665   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 23/120
	I0319 19:39:18.811028   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 24/120
	I0319 19:39:19.812321   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 25/120
	I0319 19:39:20.813670   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 26/120
	I0319 19:39:21.814906   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 27/120
	I0319 19:39:22.816350   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 28/120
	I0319 19:39:23.817651   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 29/120
	I0319 19:39:24.819707   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 30/120
	I0319 19:39:25.821566   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 31/120
	I0319 19:39:26.822754   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 32/120
	I0319 19:39:27.824029   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 33/120
	I0319 19:39:28.825779   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 34/120
	I0319 19:39:29.827365   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 35/120
	I0319 19:39:30.828660   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 36/120
	I0319 19:39:31.830691   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 37/120
	I0319 19:39:32.832301   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 38/120
	I0319 19:39:33.833509   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 39/120
	I0319 19:39:34.835600   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 40/120
	I0319 19:39:35.837245   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 41/120
	I0319 19:39:36.838649   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 42/120
	I0319 19:39:37.839971   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 43/120
	I0319 19:39:38.841341   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 44/120
	I0319 19:39:39.843094   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 45/120
	I0319 19:39:40.844469   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 46/120
	I0319 19:39:41.846777   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 47/120
	I0319 19:39:42.848156   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 48/120
	I0319 19:39:43.849633   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 49/120
	I0319 19:39:44.851635   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 50/120
	I0319 19:39:45.852902   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 51/120
	I0319 19:39:46.854613   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 52/120
	I0319 19:39:47.856327   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 53/120
	I0319 19:39:48.858544   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 54/120
	I0319 19:39:49.860617   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 55/120
	I0319 19:39:50.862798   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 56/120
	I0319 19:39:51.864676   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 57/120
	I0319 19:39:52.865991   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 58/120
	I0319 19:39:53.867501   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 59/120
	I0319 19:39:54.869457   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 60/120
	I0319 19:39:55.871044   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 61/120
	I0319 19:39:56.872725   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 62/120
	I0319 19:39:57.874596   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 63/120
	I0319 19:39:58.876433   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 64/120
	I0319 19:39:59.878382   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 65/120
	I0319 19:40:00.879907   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 66/120
	I0319 19:40:01.881307   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 67/120
	I0319 19:40:02.882596   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 68/120
	I0319 19:40:03.883961   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 69/120
	I0319 19:40:04.885353   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 70/120
	I0319 19:40:05.886750   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 71/120
	I0319 19:40:06.887994   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 72/120
	I0319 19:40:07.889409   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 73/120
	I0319 19:40:08.890923   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 74/120
	I0319 19:40:09.892755   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 75/120
	I0319 19:40:10.894119   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 76/120
	I0319 19:40:11.896200   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 77/120
	I0319 19:40:12.897802   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 78/120
	I0319 19:40:13.899197   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 79/120
	I0319 19:40:14.901235   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 80/120
	I0319 19:40:15.903011   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 81/120
	I0319 19:40:16.904625   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 82/120
	I0319 19:40:17.906075   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 83/120
	I0319 19:40:18.907537   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 84/120
	I0319 19:40:19.909163   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 85/120
	I0319 19:40:20.910652   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 86/120
	I0319 19:40:21.912070   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 87/120
	I0319 19:40:22.913278   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 88/120
	I0319 19:40:23.915456   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 89/120
	I0319 19:40:24.917380   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 90/120
	I0319 19:40:25.918624   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 91/120
	I0319 19:40:26.919921   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 92/120
	I0319 19:40:27.921486   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 93/120
	I0319 19:40:28.923052   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 94/120
	I0319 19:40:29.924476   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 95/120
	I0319 19:40:30.926808   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 96/120
	I0319 19:40:31.928826   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 97/120
	I0319 19:40:32.930726   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 98/120
	I0319 19:40:33.932340   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 99/120
	I0319 19:40:34.934234   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 100/120
	I0319 19:40:35.935604   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 101/120
	I0319 19:40:36.936928   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 102/120
	I0319 19:40:37.938250   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 103/120
	I0319 19:40:38.939480   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 104/120
	I0319 19:40:39.941103   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 105/120
	I0319 19:40:40.942471   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 106/120
	I0319 19:40:41.943764   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 107/120
	I0319 19:40:42.945021   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 108/120
	I0319 19:40:43.946795   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 109/120
	I0319 19:40:44.948460   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 110/120
	I0319 19:40:45.949814   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 111/120
	I0319 19:40:46.951187   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 112/120
	I0319 19:40:47.952630   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 113/120
	I0319 19:40:48.954772   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 114/120
	I0319 19:40:49.956388   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 115/120
	I0319 19:40:50.957547   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 116/120
	I0319 19:40:51.958901   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 117/120
	I0319 19:40:52.960211   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 118/120
	I0319 19:40:53.961397   34685 main.go:141] libmachine: (ha-218762-m04) Waiting for machine to stop 119/120
	I0319 19:40:54.962064   34685 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0319 19:40:54.962107   34685 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0319 19:40:54.963973   34685 out.go:177] 
	W0319 19:40:54.965331   34685 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0319 19:40:54.965381   34685 out.go:239] * 
	* 
	W0319 19:40:54.967484   34685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 19:40:54.968801   34685 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-218762 stop -v=7 --alsologtostderr": exit status 82
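The stderr above shows libmachine polling the VM state roughly once per second and giving up after 120 attempts, at which point minikube maps the unchanged "Running" state to GUEST_STOP_TIMEOUT (exit status 82). As an illustrative sketch only (hypothetical names, not minikube's actual stop path), a bounded wait loop of that shape could look like:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per second, up to maxAttempts times,
// and returns an error if the machine never leaves the "Running" state.
// getState and the state strings are placeholders, not minikube's API.
func waitForStop(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Fake state source that never stops, mimicking the failure above.
	alwaysRunning := func() (string, error) { return "Running", nil }
	if err := waitForStop(alwaysRunning, 5); err != nil {
		fmt.Println("stop err:", err)
	}
}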
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr: exit status 3 (19.066644927s)

                                                
                                                
-- stdout --
	ha-218762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-218762-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:40:55.024378   35003 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:40:55.024529   35003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:40:55.024542   35003 out.go:304] Setting ErrFile to fd 2...
	I0319 19:40:55.024549   35003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:40:55.024783   35003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:40:55.024961   35003 out.go:298] Setting JSON to false
	I0319 19:40:55.024986   35003 mustload.go:65] Loading cluster: ha-218762
	I0319 19:40:55.025025   35003 notify.go:220] Checking for updates...
	I0319 19:40:55.025425   35003 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:40:55.025442   35003 status.go:255] checking status of ha-218762 ...
	I0319 19:40:55.025833   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.025913   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.042756   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0319 19:40:55.043170   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.043771   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.043791   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.044105   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.044307   35003 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:40:55.045983   35003 status.go:330] ha-218762 host status = "Running" (err=<nil>)
	I0319 19:40:55.045999   35003 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:40:55.046299   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.046337   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.060288   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0319 19:40:55.060688   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.061165   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.061192   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.061494   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.061672   35003 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:40:55.064317   35003 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:40:55.064731   35003 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:40:55.064763   35003 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:40:55.064867   35003 host.go:66] Checking if "ha-218762" exists ...
	I0319 19:40:55.065132   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.065162   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.078879   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I0319 19:40:55.079250   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.079692   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.079711   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.080059   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.080280   35003 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:40:55.080442   35003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:40:55.080492   35003 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:40:55.083128   35003 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:40:55.083540   35003 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:40:55.083582   35003 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:40:55.083687   35003 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:40:55.083870   35003 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:40:55.084040   35003 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:40:55.084186   35003 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:40:55.176541   35003 ssh_runner.go:195] Run: systemctl --version
	I0319 19:40:55.185229   35003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:40:55.203448   35003 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:40:55.203471   35003 api_server.go:166] Checking apiserver status ...
	I0319 19:40:55.203510   35003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:40:55.220827   35003 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5232/cgroup
	W0319 19:40:55.230924   35003 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5232/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:40:55.230971   35003 ssh_runner.go:195] Run: ls
	I0319 19:40:55.235834   35003 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:40:55.243149   35003 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:40:55.243167   35003 status.go:422] ha-218762 apiserver status = Running (err=<nil>)
	I0319 19:40:55.243176   35003 status.go:257] ha-218762 status: &{Name:ha-218762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:40:55.243191   35003 status.go:255] checking status of ha-218762-m02 ...
	I0319 19:40:55.243464   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.243505   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.257870   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0319 19:40:55.258308   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.259021   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.259047   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.260522   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.260740   35003 main.go:141] libmachine: (ha-218762-m02) Calling .GetState
	I0319 19:40:55.262259   35003 status.go:330] ha-218762-m02 host status = "Running" (err=<nil>)
	I0319 19:40:55.262279   35003 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:40:55.262608   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.262652   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.276592   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0319 19:40:55.277062   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.277526   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.277548   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.277835   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.278002   35003 main.go:141] libmachine: (ha-218762-m02) Calling .GetIP
	I0319 19:40:55.280651   35003 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:40:55.281069   35003 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:35:21 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:40:55.281099   35003 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:40:55.281252   35003 host.go:66] Checking if "ha-218762-m02" exists ...
	I0319 19:40:55.281591   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.281630   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.296540   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0319 19:40:55.296881   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.297349   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.297372   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.297666   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.297858   35003 main.go:141] libmachine: (ha-218762-m02) Calling .DriverName
	I0319 19:40:55.298035   35003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:40:55.298058   35003 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHHostname
	I0319 19:40:55.300875   35003 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:40:55.301305   35003 main.go:141] libmachine: (ha-218762-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0e:bd", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:35:21 +0000 UTC Type:0 Mac:52:54:00:ab:0e:bd Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-218762-m02 Clientid:01:52:54:00:ab:0e:bd}
	I0319 19:40:55.301336   35003 main.go:141] libmachine: (ha-218762-m02) DBG | domain ha-218762-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:ab:0e:bd in network mk-ha-218762
	I0319 19:40:55.301514   35003 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHPort
	I0319 19:40:55.301680   35003 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHKeyPath
	I0319 19:40:55.301828   35003 main.go:141] libmachine: (ha-218762-m02) Calling .GetSSHUsername
	I0319 19:40:55.302014   35003 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m02/id_rsa Username:docker}
	I0319 19:40:55.390231   35003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 19:40:55.410616   35003 kubeconfig.go:125] found "ha-218762" server: "https://192.168.39.254:8443"
	I0319 19:40:55.410644   35003 api_server.go:166] Checking apiserver status ...
	I0319 19:40:55.410687   35003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 19:40:55.427612   35003 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0319 19:40:55.438686   35003 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:40:55.438730   35003 ssh_runner.go:195] Run: ls
	I0319 19:40:55.444129   35003 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0319 19:40:55.448810   35003 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0319 19:40:55.448828   35003 status.go:422] ha-218762-m02 apiserver status = Running (err=<nil>)
	I0319 19:40:55.448838   35003 status.go:257] ha-218762-m02 status: &{Name:ha-218762-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 19:40:55.448858   35003 status.go:255] checking status of ha-218762-m04 ...
	I0319 19:40:55.449239   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.449282   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.464252   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
	I0319 19:40:55.464584   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.465071   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.465095   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.465468   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.465659   35003 main.go:141] libmachine: (ha-218762-m04) Calling .GetState
	I0319 19:40:55.467246   35003 status.go:330] ha-218762-m04 host status = "Running" (err=<nil>)
	I0319 19:40:55.467260   35003 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:40:55.467525   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.467581   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.481511   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0319 19:40:55.481873   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.482274   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.482291   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.482579   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.482751   35003 main.go:141] libmachine: (ha-218762-m04) Calling .GetIP
	I0319 19:40:55.485602   35003 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:40:55.486089   35003 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:38:21 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:40:55.486119   35003 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:40:55.486266   35003 host.go:66] Checking if "ha-218762-m04" exists ...
	I0319 19:40:55.486587   35003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:40:55.486631   35003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:40:55.500934   35003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0319 19:40:55.501258   35003 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:40:55.501699   35003 main.go:141] libmachine: Using API Version  1
	I0319 19:40:55.501720   35003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:40:55.502004   35003 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:40:55.502185   35003 main.go:141] libmachine: (ha-218762-m04) Calling .DriverName
	I0319 19:40:55.502339   35003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:40:55.502360   35003 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHHostname
	I0319 19:40:55.504958   35003 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:40:55.505373   35003 main.go:141] libmachine: (ha-218762-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:32:6b", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:38:21 +0000 UTC Type:0 Mac:52:54:00:20:32:6b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-218762-m04 Clientid:01:52:54:00:20:32:6b}
	I0319 19:40:55.505394   35003 main.go:141] libmachine: (ha-218762-m04) DBG | domain ha-218762-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:20:32:6b in network mk-ha-218762
	I0319 19:40:55.505500   35003 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHPort
	I0319 19:40:55.505652   35003 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHKeyPath
	I0319 19:40:55.505775   35003 main.go:141] libmachine: (ha-218762-m04) Calling .GetSSHUsername
	I0319 19:40:55.505870   35003 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762-m04/id_rsa Username:docker}
	W0319 19:41:14.036483   35003 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0319 19:41:14.036584   35003 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0319 19:41:14.036601   35003 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0319 19:41:14.036611   35003 status.go:257] ha-218762-m04 status: &{Name:ha-218762-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0319 19:41:14.036643   35003 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr" : exit status 3
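The status failure above reduces to the m04 node being unreachable over SSH ("no route to host" on 192.168.39.161:22), which the status check reports as host: Error / kubelet: Nonexistent and surfaces as exit status 3. A minimal reachability probe of that kind (hypothetical helper, not the sshutil client used in the log) can be sketched as:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH reports whether a node's SSH port answers within the timeout.
// The address and timeout are illustrative; the real client retries before failing.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("dial %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	// The worker node from the log; expect "no route to host" while it is unreachable.
	if err := probeSSH("192.168.39.161:22", 5*time.Second); err != nil {
		fmt.Println("host: Error -", err)
	} else {
		fmt.Println("host: Running")
	}
}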
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-218762 -n ha-218762
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-218762 logs -n 25: (1.946196507s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m04 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp testdata/cp-test.txt                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m04_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03:/home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m03 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-218762 node stop m02 -v=7                                                     | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-218762 node start m02 -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-218762 -v=7                                                           | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-218762 -v=7                                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-218762 --wait=true -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:33 UTC | 19 Mar 24 19:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-218762                                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC |                     |
	| node    | ha-218762 node delete m03 -v=7                                                   | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC | 19 Mar 24 19:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-218762 stop -v=7                                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:33:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:33:32.286609   33042 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:33:32.286742   33042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:33:32.286756   33042 out.go:304] Setting ErrFile to fd 2...
	I0319 19:33:32.286763   33042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:33:32.286981   33042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:33:32.287531   33042 out.go:298] Setting JSON to false
	I0319 19:33:32.288454   33042 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4510,"bootTime":1710872302,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:33:32.288513   33042 start.go:139] virtualization: kvm guest
	I0319 19:33:32.290964   33042 out.go:177] * [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:33:32.292530   33042 notify.go:220] Checking for updates...
	I0319 19:33:32.292542   33042 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:33:32.294147   33042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:33:32.295577   33042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:33:32.296847   33042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:33:32.298103   33042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:33:32.299357   33042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:33:32.301120   33042 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:33:32.301217   33042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:33:32.301581   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:33:32.301616   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:33:32.316174   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0319 19:33:32.316528   33042 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:33:32.317002   33042 main.go:141] libmachine: Using API Version  1
	I0319 19:33:32.317022   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:33:32.317362   33042 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:33:32.317555   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:33:32.351520   33042 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 19:33:32.352881   33042 start.go:297] selected driver: kvm2
	I0319 19:33:32.352896   33042 start.go:901] validating driver "kvm2" against &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:33:32.353030   33042 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:33:32.353317   33042 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:33:32.353382   33042 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:33:32.367458   33042 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:33:32.368093   33042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:33:32.368167   33042 cni.go:84] Creating CNI manager for ""
	I0319 19:33:32.368181   33042 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0319 19:33:32.368224   33042 start.go:340] cluster config:
	{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:33:32.368376   33042 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:33:32.371078   33042 out.go:177] * Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	I0319 19:33:32.372422   33042 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:33:32.372463   33042 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:33:32.372475   33042 cache.go:56] Caching tarball of preloaded images
	I0319 19:33:32.372560   33042 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:33:32.372572   33042 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:33:32.372677   33042 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:33:32.372859   33042 start.go:360] acquireMachinesLock for ha-218762: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:33:32.372909   33042 start.go:364] duration metric: took 20.811µs to acquireMachinesLock for "ha-218762"
	I0319 19:33:32.372922   33042 start.go:96] Skipping create...Using existing machine configuration
	I0319 19:33:32.372929   33042 fix.go:54] fixHost starting: 
	I0319 19:33:32.373168   33042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:33:32.373198   33042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:33:32.386661   33042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0319 19:33:32.387055   33042 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:33:32.387516   33042 main.go:141] libmachine: Using API Version  1
	I0319 19:33:32.387540   33042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:33:32.387852   33042 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:33:32.388026   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:33:32.388164   33042 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:33:32.389659   33042 fix.go:112] recreateIfNeeded on ha-218762: state=Running err=<nil>
	W0319 19:33:32.389690   33042 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 19:33:32.391669   33042 out.go:177] * Updating the running kvm2 "ha-218762" VM ...
	I0319 19:33:32.392968   33042 machine.go:94] provisionDockerMachine start ...
	I0319 19:33:32.392983   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:33:32.393168   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.395460   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.395844   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.395869   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.395970   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.396139   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.396293   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.396436   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.396580   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:32.396804   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:32.396817   33042 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 19:33:32.509931   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:33:32.509964   33042 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:33:32.510226   33042 buildroot.go:166] provisioning hostname "ha-218762"
	I0319 19:33:32.510251   33042 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:33:32.510422   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.512993   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.513355   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.513380   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.513523   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.513704   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.513863   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.513958   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.514149   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:32.514360   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:32.514374   33042 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762 && echo "ha-218762" | sudo tee /etc/hostname
	I0319 19:33:32.644113   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:33:32.644141   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.647007   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.647413   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.647442   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.647742   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.647904   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.648055   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.648184   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.648334   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:32.648499   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:32.648515   33042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:33:32.757678   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:33:32.757708   33042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:33:32.757747   33042 buildroot.go:174] setting up certificates
	I0319 19:33:32.757759   33042 provision.go:84] configureAuth start
	I0319 19:33:32.757773   33042 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:33:32.758045   33042 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:33:32.760506   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.760819   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.760849   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.761049   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.763318   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.763714   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.763746   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.763882   33042 provision.go:143] copyHostCerts
	I0319 19:33:32.763928   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:33:32.763985   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:33:32.763998   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:33:32.764086   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:33:32.764273   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:33:32.764303   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:33:32.764313   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:33:32.764358   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:33:32.764459   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:33:32.764484   33042 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:33:32.764494   33042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:33:32.764528   33042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:33:32.764614   33042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762 san=[127.0.0.1 192.168.39.200 ha-218762 localhost minikube]
	I0319 19:33:32.930565   33042 provision.go:177] copyRemoteCerts
	I0319 19:33:32.930618   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:33:32.930638   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:32.932945   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.933257   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:32.933277   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:32.933435   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:32.933624   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:32.933785   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:32.933923   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:33:33.022075   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:33:33.022166   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:33:33.051685   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:33:33.051760   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0319 19:33:33.081530   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:33:33.081584   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 19:33:33.110948   33042 provision.go:87] duration metric: took 353.177548ms to configureAuth
	I0319 19:33:33.110973   33042 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:33:33.111164   33042 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:33:33.111223   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:33:33.113603   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:33.114027   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:33:33.114047   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:33:33.114236   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:33:33.114413   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:33.114566   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:33:33.114678   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:33:33.114839   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:33:33.114996   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:33:33.115014   33042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:35:04.164021   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:35:04.164045   33042 machine.go:97] duration metric: took 1m31.771066251s to provisionDockerMachine
	I0319 19:35:04.164074   33042 start.go:293] postStartSetup for "ha-218762" (driver="kvm2")
	I0319 19:35:04.164103   33042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:35:04.164121   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.164484   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:35:04.164548   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.167437   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.167949   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.167990   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.168115   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.168312   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.168483   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.168623   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:35:04.256197   33042 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:35:04.261053   33042 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:35:04.261068   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:35:04.261129   33042 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:35:04.261196   33042 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:35:04.261204   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:35:04.261281   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:35:04.271552   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:35:04.300278   33042 start.go:296] duration metric: took 136.191363ms for postStartSetup
	I0319 19:35:04.300316   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.300610   33042 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0319 19:35:04.300649   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.302797   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.303241   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.303269   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.303373   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.303552   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.303712   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.303856   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	W0319 19:35:04.392001   33042 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0319 19:35:04.392029   33042 fix.go:56] duration metric: took 1m32.019097317s for fixHost
	I0319 19:35:04.392064   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.394408   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.394733   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.394757   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.394947   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.395140   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.395324   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.395478   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.395622   33042 main.go:141] libmachine: Using SSH client type: native
	I0319 19:35:04.395822   33042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:35:04.395836   33042 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:35:04.505652   33042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710876904.467368521
	
	I0319 19:35:04.505673   33042 fix.go:216] guest clock: 1710876904.467368521
	I0319 19:35:04.505683   33042 fix.go:229] Guest: 2024-03-19 19:35:04.467368521 +0000 UTC Remote: 2024-03-19 19:35:04.392037453 +0000 UTC m=+92.158762356 (delta=75.331068ms)
	I0319 19:35:04.505712   33042 fix.go:200] guest clock delta is within tolerance: 75.331068ms
	I0319 19:35:04.505717   33042 start.go:83] releasing machines lock for "ha-218762", held for 1m32.132800309s
	I0319 19:35:04.505734   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.505961   33042 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:35:04.508564   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.508939   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.508962   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.509149   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.509814   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.509977   33042 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:35:04.510065   33042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:35:04.510109   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.510155   33042 ssh_runner.go:195] Run: cat /version.json
	I0319 19:35:04.510181   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:35:04.512778   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513063   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513143   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.513168   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513271   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.513449   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.513470   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:04.513504   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:04.513627   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:35:04.513664   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.513801   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:35:04.513818   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:35:04.513942   33042 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:35:04.514088   33042 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:35:04.594396   33042 ssh_runner.go:195] Run: systemctl --version
	I0319 19:35:04.618125   33042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:35:04.787226   33042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:35:04.797150   33042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:35:04.797208   33042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:35:04.808526   33042 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 19:35:04.808550   33042 start.go:494] detecting cgroup driver to use...
	I0319 19:35:04.808622   33042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:35:04.829793   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:35:04.845583   33042 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:35:04.845628   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:35:04.862206   33042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:35:04.906124   33042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:35:05.069464   33042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:35:05.228357   33042 docker.go:233] disabling docker service ...
	I0319 19:35:05.228418   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:35:05.247557   33042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:35:05.263674   33042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:35:05.432853   33042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:35:05.607349   33042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:35:05.623332   33042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:35:05.645123   33042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:35:05.645195   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.658231   33042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:35:05.658287   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.671663   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.685693   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.697626   33042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:35:05.709979   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.722660   33042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.734084   33042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:35:05.746161   33042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:35:05.756224   33042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:35:05.766634   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:35:05.914317   33042 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:35:07.762548   33042 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.848201785s)
	I0319 19:35:07.762575   33042 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:35:07.762626   33042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:35:07.769716   33042 start.go:562] Will wait 60s for crictl version
	I0319 19:35:07.769808   33042 ssh_runner.go:195] Run: which crictl
	I0319 19:35:07.774555   33042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:35:07.829114   33042 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:35:07.829189   33042 ssh_runner.go:195] Run: crio --version
	I0319 19:35:07.861271   33042 ssh_runner.go:195] Run: crio --version
	I0319 19:35:07.896563   33042 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:35:07.897907   33042 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:35:07.900412   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:07.900801   33042 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:35:07.900831   33042 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:35:07.901046   33042 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:35:07.907304   33042 kubeadm.go:877] updating cluster {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:35:07.907605   33042 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:35:07.907675   33042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:35:07.952307   33042 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:35:07.952333   33042 crio.go:433] Images already preloaded, skipping extraction
	I0319 19:35:07.952403   33042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:35:07.990516   33042 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:35:07.990538   33042 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:35:07.990547   33042 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.29.3 crio true true} ...
	I0319 19:35:07.990634   33042 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:35:07.990693   33042 ssh_runner.go:195] Run: crio config
	I0319 19:35:08.048605   33042 cni.go:84] Creating CNI manager for ""
	I0319 19:35:08.048629   33042 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0319 19:35:08.048643   33042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:35:08.048672   33042 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-218762 NodeName:ha-218762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:35:08.048855   33042 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-218762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 19:35:08.048883   33042 kube-vip.go:111] generating kube-vip config ...
	I0319 19:35:08.048936   33042 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:35:08.062255   33042 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:35:08.062371   33042 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0319 19:35:08.062452   33042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:35:08.073781   33042 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:35:08.073854   33042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0319 19:35:08.085167   33042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0319 19:35:08.105200   33042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:35:08.124185   33042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0319 19:35:08.143856   33042 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:35:08.162255   33042 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:35:08.167529   33042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:35:08.316341   33042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:35:08.334164   33042 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.200
	I0319 19:35:08.334190   33042 certs.go:194] generating shared ca certs ...
	I0319 19:35:08.334209   33042 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:35:08.334405   33042 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:35:08.334458   33042 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:35:08.334471   33042 certs.go:256] generating profile certs ...
	I0319 19:35:08.334595   33042 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:35:08.334636   33042 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188
	I0319 19:35:08.334653   33042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.15 192.168.39.254]
	I0319 19:35:08.426565   33042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188 ...
	I0319 19:35:08.426593   33042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188: {Name:mkc6ecf9faceb5a51d2be70a6f76e2e5b034bbc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:35:08.426761   33042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188 ...
	I0319 19:35:08.426773   33042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188: {Name:mk90ae9d7217424d4e02d14fe627f22b3debef47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:35:08.426843   33042 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.6f8bc188 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:35:08.426974   33042 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.6f8bc188 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:35:08.427093   33042 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:35:08.427108   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:35:08.427122   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:35:08.427136   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:35:08.427149   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:35:08.427161   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:35:08.427171   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:35:08.427188   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:35:08.427199   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:35:08.427253   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:35:08.427278   33042 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:35:08.427289   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:35:08.427311   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:35:08.427332   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:35:08.427353   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:35:08.427387   33042 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:35:08.427414   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.427428   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.427440   33042 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.428021   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:35:08.456237   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:35:08.482884   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:35:08.508928   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:35:08.535736   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 19:35:08.562934   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:35:08.590304   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:35:08.616243   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:35:08.643109   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:35:08.669254   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:35:08.695463   33042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:35:08.721538   33042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:35:08.739866   33042 ssh_runner.go:195] Run: openssl version
	I0319 19:35:08.747769   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:35:08.767018   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.772617   33042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.772677   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:35:08.779620   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:35:08.790459   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:35:08.802550   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.807717   33042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.807772   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:35:08.814137   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:35:08.824297   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:35:08.835732   33042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.840938   33042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.840992   33042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:35:08.847209   33042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:35:08.857432   33042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:35:08.862515   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 19:35:08.868692   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 19:35:08.874718   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 19:35:08.881188   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 19:35:08.887064   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 19:35:08.892726   33042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 19:35:08.898477   33042 kubeadm.go:391] StartCluster: {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:35:08.898583   33042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:35:08.898616   33042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:35:08.949323   33042 cri.go:89] found id: "a1d65870cde5260ce0500ccb71289c0f86b6801fac75a0e0fa9f049e3c71f5af"
	I0319 19:35:08.949341   33042 cri.go:89] found id: "d1de004c738171c7323ca701e5d648369cd503aa9cd2d5e8906959b5fcce539f"
	I0319 19:35:08.949344   33042 cri.go:89] found id: "9f2ba9b095fcc8dd64cb30696eb6e7a3126bf13e385800dd50e3866f7118578f"
	I0319 19:35:08.949347   33042 cri.go:89] found id: "488c85cb47be84f1519718650c1207789e6ff34e6a6d8dbdcae93c151a17f3ae"
	I0319 19:35:08.949350   33042 cri.go:89] found id: "109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57"
	I0319 19:35:08.949353   33042 cri.go:89] found id: "4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3"
	I0319 19:35:08.949356   33042 cri.go:89] found id: "49e04c50e3c86f3487e25d0d15f6323cce7a866985bcce74e5dbd0e51b9fa851"
	I0319 19:35:08.949358   33042 cri.go:89] found id: "ee8377d7b6d9ab60c27927f3316da1f4b57d3f5c0e41d767c103947ecf29e986"
	I0319 19:35:08.949360   33042 cri.go:89] found id: "ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6"
	I0319 19:35:08.949365   33042 cri.go:89] found id: "da2851243bc4cbcb4b941232e861e9fd41ca1d342747fd6cfd5fafc638015ca8"
	I0319 19:35:08.949368   33042 cri.go:89] found id: "dc37df944702003608d704925db1515b753c461128e874e10764393af312326c"
	I0319 19:35:08.949370   33042 cri.go:89] found id: "136b31ae3d9927e8377775f0b7c5f4f56f4f1efb51a098b418310ea990bd3bda"
	I0319 19:35:08.949376   33042 cri.go:89] found id: "82c2c39ac3bd92f9654cd97da458e06f5f5955f90aa222d8f81f1f3148088fab"
	I0319 19:35:08.949378   33042 cri.go:89] found id: "b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a"
	I0319 19:35:08.949383   33042 cri.go:89] found id: ""
	I0319 19:35:08.949419   33042 ssh_runner.go:195] Run: sudo runc list -f json
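	(Editor's note: the "found id" entries above are produced by minikube's cri.go helper, which shells out to crictl with a pod-namespace label filter, as shown in the Run line at 19:35:08.898616. The sketch below illustrates that technique only; listKubeSystemContainers is a hypothetical name for illustration, not the actual minikube function, and the real implementation executes the command over SSH inside the VM rather than locally. It assumes crictl is installed and on PATH.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers mirrors the logged command: it asks crictl for the
	// IDs of all containers (running or exited) whose pod namespace label is kube-system.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}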
	
	
	==> CRI-O <==
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.726729644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877274726706861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d20b2e8a-3161-48e4-9b2a-511facb40580 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.727654265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c16e9b9-639f-4648-86fc-b9084f21afab name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.727740154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c16e9b9-639f-4648-86fc-b9084f21afab name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.728264644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c16e9b9-639f-4648-86fc-b9084f21afab name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.790551255Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59d984cf-87a7-4083-8599-7e37d74803b2 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.790623428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59d984cf-87a7-4083-8599-7e37d74803b2 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.792536589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f22ed6a7-4cdc-4b38-8870-37bd49a6b479 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.793571016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877274793542500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f22ed6a7-4cdc-4b38-8870-37bd49a6b479 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.794241008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daea4345-524b-4bc9-a3a1-a225f6191ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.794302552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daea4345-524b-4bc9-a3a1-a225f6191ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.795764233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daea4345-524b-4bc9-a3a1-a225f6191ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.846383409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4947ce85-2d15-4a98-aa2f-0b0737490362 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.846491673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4947ce85-2d15-4a98-aa2f-0b0737490362 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.847771192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0d30ad0-a3a2-4c27-9a5b-4eecd0973334 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.848280742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877274848256633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0d30ad0-a3a2-4c27-9a5b-4eecd0973334 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.848889664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35d75d31-6ab5-49d2-b1a4-e5a982be0c76 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.848947034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35d75d31-6ab5-49d2-b1a4-e5a982be0c76 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.849349148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35d75d31-6ab5-49d2-b1a4-e5a982be0c76 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.897350867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d74842ae-c93e-4c4f-a53e-7462bb8841d4 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.897456594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d74842ae-c93e-4c4f-a53e-7462bb8841d4 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.899088159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77192136-a1aa-4b63-98c8-c8b5a37da3dd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.899703847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877274899679148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77192136-a1aa-4b63-98c8-c8b5a37da3dd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.900903506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=821567b1-107b-429a-b6a2-d3c537324069 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.900987361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=821567b1-107b-429a-b6a2-d3c537324069 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:41:14 ha-218762 crio[3796]: time="2024-03-19 19:41:14.901375724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877001161617527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710876963152425344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710876958168270684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710876958152097687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710876947623167092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c,PodSandboxId:9e1751c3a1b965e73adcecf9c73f263beedb653706cce5ac59e1b7483971c1a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710876946150735752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710876930321529094,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710876914400353390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb8
15b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914624839975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710876914560020759,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db,PodSandboxId:9f5d0382c34c1904000206972723136a1b0f266efae9c5271e6395238cb99f1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710876914331183792,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc,PodSandboxId:592738c55d5d7989d4ed83b4c676f52b050ee301a8ec84a8ab64f6fdc4215482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710876914096101812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9
205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710876914176324715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f7
1536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710876914085044021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernete
s.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0,PodSandboxId:a8ecc5bc666eb7300b0b06547c58224d219c6395aeeafd0173a4a32a86360b7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710876914013182102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kuber
netes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5224aff0311e7eed25e0c3313f812359947fe962b801b0eac41baba5e9dd7b,PodSandboxId:03d5a8bf10dee3fbb1578a778b5bc041b29d0d5b19109e492eb977768f2cfea9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876423582559343,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubern
etes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57,PodSandboxId:42b1b389a8129ccd56f9ec9c4433ed0c54aac56f9e94c05a4ab44d1dc1fe1b30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252812296534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3,PodSandboxId:9e44b306f2e4f08700f1608b4c50fb7b6fd7df0ba4f56a06d55e8a0148a10e7e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876252774019266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6,PodSandboxId:c02a60ba78138ced76d3f5934b318b1f5c6fc02b67bddd8878a15ea8b7e0c0d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0
acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876249681293906,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc37df944702003608d704925db1515b753c461128e874e10764393af312326c,PodSandboxId:59a484b792912d8098da56945396fe19c005c10ee1f6ddc90a569fe2f03ac314,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d
7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876229364964367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a,PodSandboxId:ffe45f05ed53ac695f768d7d9d20b38a855efcddb2e2122cb9ba455e15760f89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedA
t:1710876229130012625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=821567b1-107b-429a-b6a2-d3c537324069 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57b67e1d9f711       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   9e1751c3a1b96       storage-provisioner
	b97e6744af918       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               3                   9f5d0382c34c1       kindnet-d8pkw
	cd231cd9e49b3       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Running             kube-controller-manager   2                   a8ecc5bc666eb       kube-controller-manager-ha-218762
	6338f56543282       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Running             kube-apiserver            3                   592738c55d5d7       kube-apiserver-ha-218762
	e184092c1753d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   9ae1282eca7fd       busybox-7fdf7869d9-d8xsk
	b3ac103d077b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   9e1751c3a1b96       storage-provisioner
	29b54ac96c6cb       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   241791cae01a3       kube-vip-ha-218762
	3759bb815b0bd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   1                   8b012633323a1       coredns-76f75df574-6f64w
	7fe718f015a06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   1                   b66ed00d03541       coredns-76f75df574-zlz9l
	e64d30502df53       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      6 minutes ago       Running             kube-proxy                1                   a0b75df1436e1       kube-proxy-qd8kk
	8edf1240fc777       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Exited              kindnet-cni               2                   9f5d0382c34c1       kindnet-d8pkw
	e004ed7f983d2       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            1                   3ee688cdd562c       kube-scheduler-ha-218762
	d744b8d4b2141       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Exited              kube-apiserver            2                   592738c55d5d7       kube-apiserver-ha-218762
	89ce3ef06f55e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      1                   c1a4e502ec750       etcd-ha-218762
	76c29ad320500       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Exited              kube-controller-manager   1                   a8ecc5bc666eb       kube-controller-manager-ha-218762
	5d5224aff0311       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   03d5a8bf10dee       busybox-7fdf7869d9-d8xsk
	109c2437b7712       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   42b1b389a8129       coredns-76f75df574-6f64w
	4c1e36efc888a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   9e44b306f2e4f       coredns-76f75df574-zlz9l
	ab7b5d52d6006       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      17 minutes ago      Exited              kube-proxy                0                   c02a60ba78138       kube-proxy-qd8kk
	dc37df9447020       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   59a484b792912       etcd-ha-218762
	b8f592d52269d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      17 minutes ago      Exited              kube-scheduler            0                   ffe45f05ed53a       kube-scheduler-ha-218762
	
	
	==> coredns [109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57] <==
	[INFO] 10.244.0.4:33585 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003760519s
	[INFO] 10.244.0.4:59082 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137291s
	[INFO] 10.244.0.4:40935 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118623s
	[INFO] 10.244.0.4:47943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107248s
	[INFO] 10.244.0.4:59058 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076766s
	[INFO] 10.244.1.2:50311 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001848487s
	[INFO] 10.244.1.2:43198 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174765s
	[INFO] 10.244.1.2:52346 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001415553s
	[INFO] 10.244.1.2:43441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076976s
	[INFO] 10.244.1.2:34726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138048s
	[INFO] 10.244.1.2:45607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112925s
	[INFO] 10.244.2.2:40744 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001749217s
	[INFO] 10.244.2.2:53029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111621s
	[INFO] 10.244.2.2:40938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014131s
	[INFO] 10.244.2.2:56391 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130828s
	[INFO] 10.244.1.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015755s
	[INFO] 10.244.2.2:42534 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120056s
	[INFO] 10.244.2.2:54358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316425s
	[INFO] 10.244.0.4:60417 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238089s
	[INFO] 10.244.0.4:60483 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144782s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3759bb815b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3] <==
	[INFO] 10.244.2.2:44372 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161391s
	[INFO] 10.244.0.4:55323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007536s
	[INFO] 10.244.0.4:36522 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010122s
	[INFO] 10.244.0.4:59910 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068387s
	[INFO] 10.244.0.4:56467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053097s
	[INFO] 10.244.1.2:47288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107648s
	[INFO] 10.244.1.2:47476 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075973s
	[INFO] 10.244.1.2:33459 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186954s
	[INFO] 10.244.2.2:42752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177891s
	[INFO] 10.244.2.2:55553 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189177s
	[INFO] 10.244.0.4:39711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067897s
	[INFO] 10.244.0.4:46192 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.002995771s
	[INFO] 10.244.1.2:52462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000332016s
	[INFO] 10.244.1.2:33081 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215617s
	[INFO] 10.244.1.2:48821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092021s
	[INFO] 10.244.1.2:39937 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000452168s
	[INFO] 10.244.2.2:43887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122925s
	[INFO] 10.244.2.2:38523 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093183s
	[INFO] 10.244.2.2:56286 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149396s
	[INFO] 10.244.2.2:33782 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081737s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45534->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45534->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45532->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45532->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
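(The repeated "plugin/ready: Still waiting on: \"kubernetes\"" lines mean the ready plugin keeps answering not-ready on its HTTP endpoint until the kubernetes plugin completes its first list/watch against the API server, which the reflector errors above show failing. A minimal sketch of polling that readiness endpoint, assuming the default ready-plugin port 8181; the pod IP below is a placeholder, not a value from this run.)

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder address: in-cluster this would be the CoreDNS pod IP.
	const readyURL = "http://10.244.0.5:8181/ready"
	for attempt := 1; attempt <= 30; attempt++ {
		resp, err := http.Get(readyURL)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("coredns reports ready")
				return
			}
		}
		fmt.Printf("attempt %d: still waiting on coredns readiness\n", attempt)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for coredns readiness")
}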
	
	
	==> describe nodes <==
	Name:               ha-218762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:41:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:41:06 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:41:06 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:41:06 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:41:06 +0000   Tue, 19 Mar 2024 19:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-218762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee6305e340734ffab00fb0013188dc6a
	  System UUID:                ee6305e3-4073-4ffa-b00f-b0013188dc6a
	  Boot ID:                    4a3c9f80-1526-4057-9e0e-fd3e10e41bd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d8xsk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-76f75df574-6f64w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-76f75df574-zlz9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-218762                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-d8pkw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-218762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-218762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-qd8kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-218762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-218762                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m14s                  kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-218762 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Warning  ContainerGCFailed        6m19s (x2 over 7m19s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m9s                   node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           5m2s                   node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	
	
	Name:               ha-218762-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:25:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:41:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:39:28 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:39:28 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:39:28 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:39:28 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-218762-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21ee6ca9760341f0b88147e7d26bc5a4
	  System UUID:                21ee6ca9-7603-41f0-b881-47e7d26bc5a4
	  Boot ID:                    93ea4244-1402-4285-9999-90af84712cb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ds2kh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-218762-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-4b7jg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-218762-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-218762-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-9q4nx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-218762-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-218762-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-218762-m02 status is now: NodeNotReady
	  Normal  Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal  NodeNotReady             112s                   node-controller  Node ha-218762-m02 status is now: NodeNotReady
	
	
	Name:               ha-218762-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_27_38_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:27:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:38:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:39:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:39:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:39:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 19 Mar 2024 19:38:27 +0000   Tue, 19 Mar 2024 19:39:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-218762-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3252307468a44b83a5ab5199d03a0035
	  System UUID:                32523074-68a4-4b83-a5ab-5199d03a0035
	  Boot ID:                    a0d24f10-73b5-4b9e-ae00-6b857db48ab4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7l527    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-hslwj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-nth69            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-218762-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m9s                   node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           5m2s                   node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   NodeNotReady             4m29s                  node-controller  Node ha-218762-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-218762-m04 has been rebooted, boot id: a0d24f10-73b5-4b9e-ae00-6b857db48ab4
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-218762-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-218762-m04 status is now: NodeNotReady
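(The Unknown conditions on ha-218762-m04 above, "Kubelet stopped posting node status", are what the node controller reacts to when it records the NodeNotReady events and applies the unreachable taints listed earlier. A minimal client-go sketch, assuming a local kubeconfig path as a placeholder, that reads the same Ready condition for every node; it mirrors the Conditions table that `kubectl describe node` prints.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}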
	
	
	==> dmesg <==
	[  +7.074231] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.062282] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064060] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.205706] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.113821] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.284359] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.977018] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.063791] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.785726] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.566086] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.304560] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.098669] kauditd_printk_skb: 51 callbacks suppressed
	[Mar19 19:24] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 19:25] kauditd_printk_skb: 74 callbacks suppressed
	[Mar19 19:35] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	[  +0.163668] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +0.200453] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[  +0.176254] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.314534] systemd-fstab-generator[3782]: Ignoring "noauto" option for root device
	[  +2.399621] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +5.371303] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.623371] kauditd_printk_skb: 98 callbacks suppressed
	[ +37.084386] kauditd_printk_skb: 1 callbacks suppressed
	[Mar19 19:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.825614] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff] <==
	{"level":"info","ts":"2024-03-19T19:37:47.894533Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:37:47.897974Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fe8c4457455e3a5","to":"c7942b8fd0a5905a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-19T19:37:47.898067Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:37:50.632277Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:37:50.632376Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7942b8fd0a5905a","rtt":"0s","error":"dial tcp 192.168.39.15:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-19T19:38:40.71783Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.15:60018","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-19T19:38:40.731346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 switched to configuration voters=(1146381907749364645 15038635610201135437)"}
	{"level":"info","ts":"2024-03-19T19:38:40.734088Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","removed-remote-peer-id":"c7942b8fd0a5905a","removed-remote-peer-urls":["https://192.168.39.15:2380"]}
	{"level":"info","ts":"2024-03-19T19:38:40.734213Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.734395Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"fe8c4457455e3a5","removed-member-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.735287Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-03-19T19:38:40.735023Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:38:40.735626Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.745581Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:38:40.745636Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:38:40.746081Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.747512Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a","error":"context canceled"}
	{"level":"warn","ts":"2024-03-19T19:38:40.747676Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"c7942b8fd0a5905a","error":"failed to read c7942b8fd0a5905a on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-19T19:38:40.747724Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.747934Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a","error":"context canceled"}
	{"level":"info","ts":"2024-03-19T19:38:40.747996Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:38:40.748034Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:38:40.748118Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fe8c4457455e3a5","removed-remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.767919Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fe8c4457455e3a5","remote-peer-id-stream-handler":"fe8c4457455e3a5","remote-peer-id-from":"c7942b8fd0a5905a"}
	{"level":"warn","ts":"2024-03-19T19:38:40.773967Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.15:37586","server-name":"","error":"EOF"}
	
	
	==> etcd [dc37df944702003608d704925db1515b753c461128e874e10764393af312326c] <==
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/03/19 19:33:33 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-19T19:33:33.327488Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T19:33:33.328117Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.200:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T19:33:33.329537Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fe8c4457455e3a5","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-19T19:33:33.330028Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330171Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330412Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330641Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330719Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330752Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330765Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c7942b8fd0a5905a"}
	{"level":"info","ts":"2024-03-19T19:33:33.330771Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.330779Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.330879Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.33095Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.330977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.331034Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.331072Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:33:33.333908Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-03-19T19:33:33.334129Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-03-19T19:33:33.334147Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-218762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	
	
	==> kernel <==
	 19:41:15 up 17 min,  0 users,  load average: 0.22, 0.43, 0.37
	Linux ha-218762 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db] <==
	I0319 19:35:15.073713       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0319 19:35:15.082151       1 main.go:107] hostIP = 192.168.39.200
	podIP = 192.168.39.200
	I0319 19:35:15.082344       1 main.go:116] setting mtu 1500 for CNI 
	I0319 19:35:15.082377       1 main.go:146] kindnetd IP family: "ipv4"
	I0319 19:35:15.082414       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0319 19:35:18.243500       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0319 19:35:28.251120       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0319 19:35:30.530434       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0319 19:35:33.602332       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0319 19:35:36.674326       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
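(This first kindnet container keeps retrying the node list while the in-cluster API VIP 10.96.0.1 is unreachable and panics once its retry budget is exhausted, which matches the replacement kindnet container shown in the next block. A minimal sketch of that bounded-retry-then-fail pattern, using a plain TCP reachability probe as a stand-in for the real client-go node list; the retry count and delay are assumptions.)

package main

import (
	"fmt"
	"net"
	"time"
)

// reachAPIServer is a stand-in for kindnet's node-list call: it only checks
// that the in-cluster API server address accepts TCP connections at all.
func reachAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const apiServer = "10.96.0.1:443" // service VIP from the logs above
	const maxRetries = 12             // assumed retry budget

	for attempt := 1; attempt <= maxRetries; attempt++ {
		if err := reachAPIServer(apiServer); err != nil {
			fmt.Printf("Failed to reach API server, retrying after error: %v\n", err)
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Println("API server reachable, continuing startup")
		return
	}
	panic("Reached maximum retries obtaining node list")
}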
	
	
	==> kindnet [b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a] <==
	I0319 19:40:34.740436       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:40:44.747089       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:40:44.747207       1 main.go:227] handling current node
	I0319 19:40:44.747235       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:40:44.747253       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:40:44.747378       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:40:44.747398       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:40:54.762082       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:40:54.762429       1 main.go:227] handling current node
	I0319 19:40:54.762500       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:40:54.762527       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:40:54.762674       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:40:54.762707       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:41:04.769398       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:41:04.769614       1 main.go:227] handling current node
	I0319 19:41:04.769671       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:41:04.769705       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:41:04.769932       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:41:04.769973       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:41:14.784773       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:41:14.784850       1 main.go:227] handling current node
	I0319 19:41:14.784860       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:41:14.784866       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:41:14.785121       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:41:14.785184       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99] <==
	I0319 19:36:00.651624       1 establishing_controller.go:76] Starting EstablishingController
	I0319 19:36:00.651760       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0319 19:36:00.652296       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0319 19:36:00.653001       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0319 19:36:00.736338       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 19:36:00.737739       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 19:36:00.738270       1 aggregator.go:165] initial CRD sync complete...
	I0319 19:36:00.738310       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 19:36:00.738370       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 19:36:00.744583       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 19:36:00.793738       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 19:36:00.831228       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 19:36:00.831303       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0319 19:36:00.831427       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 19:36:00.833585       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 19:36:00.836493       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 19:36:00.839871       1 cache.go:39] Caches are synced for autoregister controller
	W0319 19:36:00.853478       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.234]
	I0319 19:36:00.857322       1 controller.go:624] quota admission added evaluator for: endpoints
	I0319 19:36:00.871921       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0319 19:36:00.875453       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0319 19:36:01.644844       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0319 19:36:02.305662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.200 192.168.39.234]
	W0319 19:36:12.303433       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200 192.168.39.234]
	W0319 19:38:52.313245       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200 192.168.39.234]
	
	
	==> kube-apiserver [d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc] <==
	I0319 19:35:15.124630       1 options.go:222] external host was not specified, using 192.168.39.200
	I0319 19:35:15.131991       1 server.go:148] Version: v1.29.3
	I0319 19:35:15.132389       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:35:15.750666       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0319 19:35:15.765707       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0319 19:35:15.765750       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0319 19:35:15.770127       1 instance.go:297] Using reconciler: lease
	W0319 19:35:35.749996       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0319 19:35:35.750099       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0319 19:35:35.771547       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	W0319 19:35:35.771547       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0] <==
	I0319 19:35:15.860675       1 serving.go:380] Generated self-signed cert in-memory
	I0319 19:35:16.293930       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0319 19:35:16.294052       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:35:16.296398       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 19:35:16.296604       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 19:35:16.297549       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 19:35:16.297633       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0319 19:35:36.778981       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.200:8443/healthz\": dial tcp 192.168.39.200:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca] <==
	I0319 19:39:29.233512       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-nth69" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:39:29.271291       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7l527" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 19:39:29.314107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.977009ms"
	I0319 19:39:29.314209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.463µs"
	I0319 19:39:32.796273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="10.263773ms"
	I0319 19:39:32.796665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.191µs"
	E0319 19:39:33.902496       1 gc_controller.go:153] "Failed to get node" err="node \"ha-218762-m03\" not found" node="ha-218762-m03"
	E0319 19:39:33.902630       1 gc_controller.go:153] "Failed to get node" err="node \"ha-218762-m03\" not found" node="ha-218762-m03"
	E0319 19:39:33.902657       1 gc_controller.go:153] "Failed to get node" err="node \"ha-218762-m03\" not found" node="ha-218762-m03"
	E0319 19:39:33.902700       1 gc_controller.go:153] "Failed to get node" err="node \"ha-218762-m03\" not found" node="ha-218762-m03"
	E0319 19:39:33.902724       1 gc_controller.go:153] "Failed to get node" err="node \"ha-218762-m03\" not found" node="ha-218762-m03"
	I0319 19:39:33.915174       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-scheduler-ha-218762-m03"
	I0319 19:39:33.948015       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-scheduler-ha-218762-m03"
	I0319 19:39:33.948068       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-vip-ha-218762-m03"
	I0319 19:39:33.977115       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-vip-ha-218762-m03"
	I0319 19:39:33.977238       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-lq48k"
	I0319 19:39:34.007404       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-lq48k"
	I0319 19:39:34.007536       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-wv72v"
	I0319 19:39:34.041501       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-wv72v"
	I0319 19:39:34.041636       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-controller-manager-ha-218762-m03"
	I0319 19:39:34.067112       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-controller-manager-ha-218762-m03"
	I0319 19:39:34.067190       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/etcd-ha-218762-m03"
	I0319 19:39:34.096735       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/etcd-ha-218762-m03"
	I0319 19:39:34.096849       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-apiserver-ha-218762-m03"
	I0319 19:39:34.127615       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-apiserver-ha-218762-m03"
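(The PodGC entries above show the controller manager force-deleting pods that were still bound to the removed node ha-218762-m03. A minimal client-go sketch of the same kind of forced deletion with a zero grace period; the kubeconfig path, namespace, and pod name are placeholders rather than values taken from this run.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	grace := int64(0) // force: do not wait for graceful shutdown
	err = cs.CoreV1().Pods("kube-system").Delete(
		context.Background(),
		"kube-apiserver-ha-218762-m03", // placeholder pod name
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("forced deletion requested")
}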
	
	
	==> kube-proxy [ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6] <==
	E0319 19:32:28.965404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:32.035632       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:32.035877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:32.036002       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:32.036067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:35.108641       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:35.108872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:38.179557       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:38.180161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:38.180089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:38.180241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:41.252377       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:41.252670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:47.396133       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:47.396305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:50.467197       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:50.467719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:32:50.467919       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:32:50.467971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:33:08.899025       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:33:08.899222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:33:15.043609       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:33:15.043783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:33:18.114394       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:33:18.114516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-218762&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec] <==
	I0319 19:35:16.231246       1 server_others.go:72] "Using iptables proxy"
	E0319 19:35:17.922736       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:20.994762       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:24.067129       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:30.212164       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:42.499409       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0319 19:36:01.210939       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0319 19:36:01.284898       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:36:01.284967       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:36:01.285005       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:36:01.289546       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:36:01.290085       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:36:01.290131       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:36:01.292784       1 config.go:188] "Starting service config controller"
	I0319 19:36:01.292917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:36:01.294451       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:36:01.294490       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:36:01.295482       1 config.go:315] "Starting node config controller"
	I0319 19:36:01.295520       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:36:01.394506       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:36:01.395963       1 shared_informer.go:318] Caches are synced for node config
	I0319 19:36:01.396039       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a] <==
	E0319 19:33:26.378038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:33:26.726044       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 19:33:26.726127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 19:33:26.748198       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 19:33:26.748273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 19:33:26.928301       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 19:33:26.928396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 19:33:26.976197       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:33:26.976221       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 19:33:27.006723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 19:33:27.006751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 19:33:27.093694       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 19:33:27.093728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 19:33:27.351454       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0319 19:33:27.351539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 19:33:27.352719       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 19:33:27.352776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 19:33:27.472941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 19:33:27.473141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 19:33:28.231106       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:33:28.231163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:33:28.321232       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 19:33:28.321317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0319 19:33:33.250655       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 19:33:33.262223       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	
	
	==> kube-scheduler [e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91] <==
	W0319 19:35:55.457722       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.200:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:55.458021       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.200:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:55.459458       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.200:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:55.459554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.200:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:56.245959       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:56.246001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:56.710434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:56.710481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:57.032096       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:57.032202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:57.643428       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:57.643496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.000457       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.000501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.179736       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.179931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.398246       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.398328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:36:00.710582       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 19:36:00.710646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0319 19:36:19.987053       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0319 19:38:37.419998       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-d2xjc\": pod busybox-7fdf7869d9-d2xjc is already assigned to node \"ha-218762-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-d2xjc" node="ha-218762-m04"
	E0319 19:38:37.420742       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 2b74c569-a965-4d06-9151-f04ea13408a5(default/busybox-7fdf7869d9-d2xjc) wasn't assumed so cannot be forgotten"
	E0319 19:38:37.421117       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-d2xjc\": pod busybox-7fdf7869d9-d2xjc is already assigned to node \"ha-218762-m04\"" pod="default/busybox-7fdf7869d9-d2xjc"
	I0319 19:38:37.421215       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-d2xjc" node="ha-218762-m04"
	
	
	==> kubelet <==
	Mar 19 19:36:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:36:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:36:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:36:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:37:02 ha-218762 kubelet[1386]: I0319 19:37:02.573520    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-218762" podStartSLOduration=23.573443292 podStartE2EDuration="23.573443292s" podCreationTimestamp="2024-03-19 19:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-19 19:36:42.214034831 +0000 UTC m=+766.269691857" watchObservedRunningTime="2024-03-19 19:37:02.573443292 +0000 UTC m=+786.629100339"
	Mar 19 19:37:56 ha-218762 kubelet[1386]: E0319 19:37:56.168290    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:37:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:37:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:37:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:37:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:38:56 ha-218762 kubelet[1386]: E0319 19:38:56.167559    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:38:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:38:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:38:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:38:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:39:56 ha-218762 kubelet[1386]: E0319 19:39:56.167650    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:39:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:39:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:39:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:39:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:40:56 ha-218762 kubelet[1386]: E0319 19:40:56.166984    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:40:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:40:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:40:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:40:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 19:41:14.398747   35132 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-218762 -n ha-218762
helpers_test.go:261: (dbg) Run:  kubectl --context ha-218762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (719.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-218762 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0319 19:44:30.844233   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:45:04.834982   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:49:30.843497   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:50:04.834669   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:53:07.881237   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-218762 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: signal: killed (11m56.864419857s)

                                                
                                                
-- stdout --
	* [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	* Updating the running kvm2 "ha-218762" VM ...
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-218762-m02" control-plane node in "ha-218762" cluster
	* Updating the running kvm2 "ha-218762-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.200
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.200
	* Verifying Kubernetes components...
	
	* Starting "ha-218762-m04" worker node in "ha-218762" cluster
	* Restarting existing kvm2 VM for "ha-218762-m04" ...
	* Found network options:
	  - NO_PROXY=192.168.39.200,192.168.39.234
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.200
	  - env NO_PROXY=192.168.39.200,192.168.39.234
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:41:16.722846   35208 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:41:16.723080   35208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:41:16.723089   35208 out.go:304] Setting ErrFile to fd 2...
	I0319 19:41:16.723094   35208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:41:16.723277   35208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:41:16.723785   35208 out.go:298] Setting JSON to false
	I0319 19:41:16.724685   35208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4975,"bootTime":1710872302,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:41:16.724741   35208 start.go:139] virtualization: kvm guest
	I0319 19:41:16.727163   35208 out.go:177] * [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:41:16.728663   35208 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:41:16.728671   35208 notify.go:220] Checking for updates...
	I0319 19:41:16.729931   35208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:41:16.731349   35208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:41:16.732843   35208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:41:16.734159   35208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:41:16.735562   35208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:41:16.737534   35208 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:41:16.738118   35208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:41:16.738164   35208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:41:16.752912   35208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0319 19:41:16.753254   35208 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:41:16.753759   35208 main.go:141] libmachine: Using API Version  1
	I0319 19:41:16.753779   35208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:41:16.754111   35208 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:41:16.754284   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.754560   35208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:41:16.754811   35208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:41:16.754842   35208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:41:16.769161   35208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39511
	I0319 19:41:16.769542   35208 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:41:16.769971   35208 main.go:141] libmachine: Using API Version  1
	I0319 19:41:16.770013   35208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:41:16.770307   35208 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:41:16.770479   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.805919   35208 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 19:41:16.807279   35208 start.go:297] selected driver: kvm2
	I0319 19:41:16.807291   35208 start.go:901] validating driver "kvm2" against &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:41:16.807436   35208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:41:16.807730   35208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:41:16.807815   35208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:41:16.821943   35208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:41:16.822589   35208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:41:16.822659   35208 cni.go:84] Creating CNI manager for ""
	I0319 19:41:16.822671   35208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 19:41:16.822729   35208 start.go:340] cluster config:
	{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:41:16.822869   35208 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:41:16.825314   35208 out.go:177] * Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	I0319 19:41:16.826488   35208 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:41:16.826520   35208 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:41:16.826533   35208 cache.go:56] Caching tarball of preloaded images
	I0319 19:41:16.826625   35208 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:41:16.826636   35208 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:41:16.826755   35208 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:41:16.826949   35208 start.go:360] acquireMachinesLock for ha-218762: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:41:16.827002   35208 start.go:364] duration metric: took 31.919µs to acquireMachinesLock for "ha-218762"
	I0319 19:41:16.827022   35208 start.go:96] Skipping create...Using existing machine configuration
	I0319 19:41:16.827031   35208 fix.go:54] fixHost starting: 
	I0319 19:41:16.827288   35208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:41:16.827326   35208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:41:16.840550   35208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0319 19:41:16.840937   35208 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:41:16.841387   35208 main.go:141] libmachine: Using API Version  1
	I0319 19:41:16.841410   35208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:41:16.841702   35208 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:41:16.841877   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.842037   35208 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:41:16.843627   35208 fix.go:112] recreateIfNeeded on ha-218762: state=Running err=<nil>
	W0319 19:41:16.843646   35208 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 19:41:16.846414   35208 out.go:177] * Updating the running kvm2 "ha-218762" VM ...
	I0319 19:41:16.847718   35208 machine.go:94] provisionDockerMachine start ...
	I0319 19:41:16.847736   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.847931   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:16.850441   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.850860   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:16.850898   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.851025   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:16.851185   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.851337   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.851452   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:16.851596   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:16.851790   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:16.851803   35208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 19:41:16.965993   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:41:16.966017   35208 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:41:16.966280   35208 buildroot.go:166] provisioning hostname "ha-218762"
	I0319 19:41:16.966305   35208 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:41:16.966527   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:16.969036   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.969448   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:16.969479   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.969647   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:16.969845   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.970009   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.970145   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:16.970307   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:16.970485   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:16.970499   35208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762 && echo "ha-218762" | sudo tee /etc/hostname
	I0319 19:41:17.113991   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:41:17.114013   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.116962   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.117351   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.117392   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.117610   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:17.117802   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.117973   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.118105   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:17.118225   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:17.118394   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:17.118411   35208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:41:17.229575   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:41:17.229604   35208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:41:17.229649   35208 buildroot.go:174] setting up certificates
	I0319 19:41:17.229661   35208 provision.go:84] configureAuth start
	I0319 19:41:17.229678   35208 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:41:17.229933   35208 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:41:17.232658   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.233095   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.233115   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.233265   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.235742   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.236133   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.236162   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.236299   35208 provision.go:143] copyHostCerts
	I0319 19:41:17.236326   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:41:17.236357   35208 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:41:17.236366   35208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:41:17.236431   35208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:41:17.236502   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:41:17.236525   35208 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:41:17.236535   35208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:41:17.236569   35208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:41:17.236661   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:41:17.236685   35208 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:41:17.236695   35208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:41:17.236734   35208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:41:17.236806   35208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762 san=[127.0.0.1 192.168.39.200 ha-218762 localhost minikube]
	I0319 19:41:17.404251   35208 provision.go:177] copyRemoteCerts
	I0319 19:41:17.404345   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:41:17.404366   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.407053   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.407434   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.407460   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.407635   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:17.407820   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.407969   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:17.408121   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:41:17.497355   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:41:17.497422   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:41:17.530581   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:41:17.530647   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0319 19:41:17.566597   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:41:17.566665   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 19:41:17.595371   35208 provision.go:87] duration metric: took 365.696289ms to configureAuth
	I0319 19:41:17.595394   35208 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:41:17.595636   35208 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:41:17.595714   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.598401   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.598793   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.598822   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.598981   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:17.599193   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.599350   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.599519   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:17.599675   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:17.599864   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:17.599889   35208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:42:52.485831   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:42:52.485871   35208 machine.go:97] duration metric: took 1m35.638131731s to provisionDockerMachine
	I0319 19:42:52.485897   35208 start.go:293] postStartSetup for "ha-218762" (driver="kvm2")
	I0319 19:42:52.485909   35208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:42:52.485927   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.486276   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:42:52.486302   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.489926   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.490368   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.490403   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.490533   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.490746   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.490968   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.491112   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:42:52.581637   35208 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:42:52.586866   35208 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:42:52.586894   35208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:42:52.586968   35208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:42:52.587065   35208 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:42:52.587077   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:42:52.587186   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:42:52.597449   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:42:52.627642   35208 start.go:296] duration metric: took 141.73325ms for postStartSetup
	I0319 19:42:52.627683   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.627983   35208 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0319 19:42:52.628015   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.630823   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.631246   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.631267   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.631463   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.631645   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.631805   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.631946   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	W0319 19:42:52.723150   35208 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0319 19:42:52.723176   35208 fix.go:56] duration metric: took 1m35.896145073s for fixHost
	I0319 19:42:52.723202   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.725960   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.726326   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.726354   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.726529   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.726699   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.726866   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.727015   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.727183   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:42:52.727327   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:42:52.727337   35208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 19:42:52.837288   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710877372.793681474
	
	I0319 19:42:52.837309   35208 fix.go:216] guest clock: 1710877372.793681474
	I0319 19:42:52.837316   35208 fix.go:229] Guest: 2024-03-19 19:42:52.793681474 +0000 UTC Remote: 2024-03-19 19:42:52.723184592 +0000 UTC m=+96.046233179 (delta=70.496882ms)
	I0319 19:42:52.837371   35208 fix.go:200] guest clock delta is within tolerance: 70.496882ms
	I0319 19:42:52.837379   35208 start.go:83] releasing machines lock for "ha-218762", held for 1m36.010365328s
	I0319 19:42:52.837405   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.837669   35208 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:42:52.840360   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.840744   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.840762   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.840950   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.841491   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.841662   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.841723   35208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:42:52.841769   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.841868   35208 ssh_runner.go:195] Run: cat /version.json
	I0319 19:42:52.841893   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.844531   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.844556   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.844890   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.844920   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.844947   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.844963   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.845195   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.845209   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.845371   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.845409   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.845517   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.845565   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.845716   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:42:52.845733   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:42:52.979617   35208 ssh_runner.go:195] Run: systemctl --version
	I0319 19:42:53.002785   35208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:42:53.220725   35208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:42:53.232392   35208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:42:53.232450   35208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:42:53.249851   35208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 19:42:53.249881   35208 start.go:494] detecting cgroup driver to use...
	I0319 19:42:53.249937   35208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:42:53.271320   35208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:42:53.286718   35208 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:42:53.286778   35208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:42:53.302477   35208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:42:53.317014   35208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:42:53.478834   35208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:42:53.642433   35208 docker.go:233] disabling docker service ...
	I0319 19:42:53.642504   35208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:42:53.665498   35208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:42:53.680831   35208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:42:53.839658   35208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:42:54.008022   35208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:42:54.027025   35208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:42:54.048701   35208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:42:54.048759   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.061329   35208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:42:54.061393   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.076953   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.088346   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.102476   35208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:42:54.115577   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.130929   35208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.143218   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.157961   35208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:42:54.170666   35208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:42:54.182485   35208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:42:54.325865   35208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:44:28.626329   35208 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.300422214s)
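The sed commands above rewrite CRI-O's /etc/crio/crio.conf.d/02-crio.conf drop-in before the restart. Reconstructed from those edits alone (the [crio.image]/[crio.runtime] section placement is an assumption, not shown in the log), the resulting file would look roughly like:

    # /etc/crio/crio.conf.d/02-crio.conf -- illustrative reconstruction, not copied from the node
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Note that the restart itself accounts for roughly 94 seconds of this run (the 1m34.3s completion above).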
	I0319 19:44:28.626364   35208 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:44:28.626426   35208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:44:28.634391   35208 start.go:562] Will wait 60s for crictl version
	I0319 19:44:28.634454   35208 ssh_runner.go:195] Run: which crictl
	I0319 19:44:28.638792   35208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:44:28.683452   35208 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:44:28.683526   35208 ssh_runner.go:195] Run: crio --version
	I0319 19:44:28.717521   35208 ssh_runner.go:195] Run: crio --version
	I0319 19:44:28.754397   35208 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:44:28.755829   35208 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:44:28.758523   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:44:28.758934   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:44:28.758966   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:44:28.759084   35208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:44:28.764360   35208 kubeadm.go:877] updating cluster {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:44:28.764477   35208 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:44:28.764517   35208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:44:28.843474   35208 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:44:28.843493   35208 crio.go:433] Images already preloaded, skipping extraction
	I0319 19:44:28.843538   35208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:44:28.882942   35208 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:44:28.882959   35208 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:44:28.882969   35208 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.29.3 crio true true} ...
	I0319 19:44:28.883106   35208 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:44:28.883179   35208 ssh_runner.go:195] Run: crio config
	I0319 19:44:28.940881   35208 cni.go:84] Creating CNI manager for ""
	I0319 19:44:28.940899   35208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 19:44:28.940908   35208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:44:28.940926   35208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-218762 NodeName:ha-218762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:44:28.941041   35208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-218762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
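The block above is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration bundle that minikube later writes to /var/tmp/minikube/kubeadm.yaml.new in this run. As an illustrative aside (not something this test executes), a generated config like this can be exercised without mutating the node by using kubeadm's dry-run mode:

    # Illustrative only: show what kubeadm would do with the generated config
    # without writing real manifests or certificates.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run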
	
	I0319 19:44:28.941061   35208 kube-vip.go:111] generating kube-vip config ...
	I0319 19:44:28.941096   35208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:44:28.954823   35208 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:44:28.954941   35208 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
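The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod; it advertises the VIP 192.168.39.254 via ARP and, with lb_enable set, load-balances the apiservers on port 8443. A minimal illustrative liveness probe against the VIP (not part of this test run):

    # Illustrative only: the apiserver's /healthz is readable anonymously by default;
    # --insecure skips CA verification for a quick manual check of the VIP.
    curl --insecure --max-time 2 https://192.168.39.254:8443/healthz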
	I0319 19:44:28.955017   35208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:44:28.969211   35208 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:44:28.969271   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0319 19:44:28.980593   35208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0319 19:44:28.999938   35208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:44:29.019651   35208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0319 19:44:29.040543   35208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:44:29.059211   35208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:44:29.063794   35208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:44:29.217711   35208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:44:29.235557   35208 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.200
	I0319 19:44:29.235586   35208 certs.go:194] generating shared ca certs ...
	I0319 19:44:29.235608   35208 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:44:29.235784   35208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:44:29.235832   35208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:44:29.235845   35208 certs.go:256] generating profile certs ...
	I0319 19:44:29.235939   35208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:44:29.235974   35208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b
	I0319 19:44:29.235992   35208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.254]
	I0319 19:44:29.345052   35208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b ...
	I0319 19:44:29.345079   35208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b: {Name:mkd29a79762cc534100f07cae164518eef04551c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:44:29.345242   35208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b ...
	I0319 19:44:29.345255   35208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b: {Name:mk529b650166385138eb1d2ad329973d7a28d535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:44:29.345322   35208 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:44:29.345451   35208 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:44:29.345568   35208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:44:29.345584   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:44:29.345596   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:44:29.345608   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:44:29.345621   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:44:29.345634   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:44:29.345646   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:44:29.345658   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:44:29.345672   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:44:29.345730   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:44:29.345757   35208 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:44:29.345767   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:44:29.345787   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:44:29.345809   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:44:29.345841   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:44:29.345877   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:44:29.345904   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.345917   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.345931   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.346402   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:44:29.374347   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:44:29.401719   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:44:29.428926   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:44:29.456966   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 19:44:29.485465   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:44:29.513105   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:44:29.539658   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:44:29.566586   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:44:29.595011   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:44:29.622754   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:44:29.650348   35208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:44:29.669862   35208 ssh_runner.go:195] Run: openssl version
	I0319 19:44:29.676931   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:44:29.690087   35208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.695507   35208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.695562   35208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.701954   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:44:29.713329   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:44:29.725776   35208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.730733   35208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.730789   35208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.738024   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:44:29.749004   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:44:29.761296   35208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.766484   35208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.766524   35208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.772975   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
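The openssl x509 -hash -noout runs above compute OpenSSL's subject-name hash for each CA certificate, which is where the /etc/ssl/certs/<hash>.0 symlink names (b5213941.0, 51391683.0, 3ec20f2e.0) come from. An illustrative equivalent of one of the link steps:

    # Illustrative only: derive the subject-hash name that OpenSSL-based clients
    # look up, then point it at the PEM, as the ln -fs commands above do.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"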
	I0319 19:44:29.785509   35208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:44:29.790774   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 19:44:29.797378   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 19:44:29.803595   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 19:44:29.809997   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 19:44:29.816293   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 19:44:29.822531   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
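The six openssl runs above use -checkend 86400, i.e. "will this certificate still be valid 86400 seconds (24 hours) from now?": a zero exit status means it will, and a non-zero status would trigger certificate regeneration. Illustrative usage:

    # Illustrative only: -checkend N exits 0 if the cert is still valid N seconds from now.
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"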
	I0319 19:44:29.828877   35208 kubeadm.go:391] StartCluster: {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:44:29.828970   35208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:44:29.829020   35208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:44:29.892645   35208 cri.go:89] found id: "57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1"
	I0319 19:44:29.892663   35208 cri.go:89] found id: "b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a"
	I0319 19:44:29.892666   35208 cri.go:89] found id: "cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca"
	I0319 19:44:29.892669   35208 cri.go:89] found id: "6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99"
	I0319 19:44:29.892671   35208 cri.go:89] found id: "b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c"
	I0319 19:44:29.892674   35208 cri.go:89] found id: "29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc"
	I0319 19:44:29.892677   35208 cri.go:89] found id: "3759bb815b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0"
	I0319 19:44:29.892680   35208 cri.go:89] found id: "7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	I0319 19:44:29.892683   35208 cri.go:89] found id: "e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec"
	I0319 19:44:29.892688   35208 cri.go:89] found id: "8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db"
	I0319 19:44:29.892690   35208 cri.go:89] found id: "e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91"
	I0319 19:44:29.892693   35208 cri.go:89] found id: "d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc"
	I0319 19:44:29.892695   35208 cri.go:89] found id: "89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff"
	I0319 19:44:29.892698   35208 cri.go:89] found id: "76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0"
	I0319 19:44:29.892702   35208 cri.go:89] found id: "109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57"
	I0319 19:44:29.892704   35208 cri.go:89] found id: "4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3"
	I0319 19:44:29.892707   35208 cri.go:89] found id: "ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6"
	I0319 19:44:29.892711   35208 cri.go:89] found id: "dc37df944702003608d704925db1515b753c461128e874e10764393af312326c"
	I0319 19:44:29.892713   35208 cri.go:89] found id: "b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a"
	I0319 19:44:29.892716   35208 cri.go:89] found id: ""
	I0319 19:44:29.892752   35208 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-218762 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-218762 -n ha-218762
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-218762 logs -n 25: (1.873463947s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m04 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp testdata/cp-test.txt                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762:/home/docker/cp-test_ha-218762-m04_ha-218762.txt                       |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762 sudo cat                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762.txt                                 |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m02:/home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m02 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m03:/home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n                                                                 | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | ha-218762-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-218762 ssh -n ha-218762-m03 sudo cat                                          | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC | 19 Mar 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-218762 node stop m02 -v=7                                                     | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-218762 node start m02 -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-218762 -v=7                                                           | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-218762 -v=7                                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-218762 --wait=true -v=7                                                    | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:33 UTC | 19 Mar 24 19:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-218762                                                                | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC |                     |
	| node    | ha-218762 node delete m03 -v=7                                                   | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC | 19 Mar 24 19:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-218762 stop -v=7                                                              | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-218762 --wait=true                                                         | ha-218762 | jenkins | v1.32.0 | 19 Mar 24 19:41 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:41:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:41:16.722846   35208 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:41:16.723080   35208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:41:16.723089   35208 out.go:304] Setting ErrFile to fd 2...
	I0319 19:41:16.723094   35208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:41:16.723277   35208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:41:16.723785   35208 out.go:298] Setting JSON to false
	I0319 19:41:16.724685   35208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4975,"bootTime":1710872302,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:41:16.724741   35208 start.go:139] virtualization: kvm guest
	I0319 19:41:16.727163   35208 out.go:177] * [ha-218762] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:41:16.728663   35208 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:41:16.728671   35208 notify.go:220] Checking for updates...
	I0319 19:41:16.729931   35208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:41:16.731349   35208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:41:16.732843   35208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:41:16.734159   35208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:41:16.735562   35208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:41:16.737534   35208 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:41:16.738118   35208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:41:16.738164   35208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:41:16.752912   35208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0319 19:41:16.753254   35208 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:41:16.753759   35208 main.go:141] libmachine: Using API Version  1
	I0319 19:41:16.753779   35208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:41:16.754111   35208 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:41:16.754284   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.754560   35208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:41:16.754811   35208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:41:16.754842   35208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:41:16.769161   35208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39511
	I0319 19:41:16.769542   35208 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:41:16.769971   35208 main.go:141] libmachine: Using API Version  1
	I0319 19:41:16.770013   35208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:41:16.770307   35208 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:41:16.770479   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.805919   35208 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 19:41:16.807279   35208 start.go:297] selected driver: kvm2
	I0319 19:41:16.807291   35208 start.go:901] validating driver "kvm2" against &{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:41:16.807436   35208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:41:16.807730   35208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:41:16.807815   35208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:41:16.821943   35208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:41:16.822589   35208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 19:41:16.822659   35208 cni.go:84] Creating CNI manager for ""
	I0319 19:41:16.822671   35208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 19:41:16.822729   35208 start.go:340] cluster config:
	{Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:41:16.822869   35208 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:41:16.825314   35208 out.go:177] * Starting "ha-218762" primary control-plane node in "ha-218762" cluster
	I0319 19:41:16.826488   35208 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:41:16.826520   35208 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:41:16.826533   35208 cache.go:56] Caching tarball of preloaded images
	I0319 19:41:16.826625   35208 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 19:41:16.826636   35208 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 19:41:16.826755   35208 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/config.json ...
	I0319 19:41:16.826949   35208 start.go:360] acquireMachinesLock for ha-218762: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 19:41:16.827002   35208 start.go:364] duration metric: took 31.919µs to acquireMachinesLock for "ha-218762"
	I0319 19:41:16.827022   35208 start.go:96] Skipping create...Using existing machine configuration
	I0319 19:41:16.827031   35208 fix.go:54] fixHost starting: 
	I0319 19:41:16.827288   35208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:41:16.827326   35208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:41:16.840550   35208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0319 19:41:16.840937   35208 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:41:16.841387   35208 main.go:141] libmachine: Using API Version  1
	I0319 19:41:16.841410   35208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:41:16.841702   35208 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:41:16.841877   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.842037   35208 main.go:141] libmachine: (ha-218762) Calling .GetState
	I0319 19:41:16.843627   35208 fix.go:112] recreateIfNeeded on ha-218762: state=Running err=<nil>
	W0319 19:41:16.843646   35208 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 19:41:16.846414   35208 out.go:177] * Updating the running kvm2 "ha-218762" VM ...
	I0319 19:41:16.847718   35208 machine.go:94] provisionDockerMachine start ...
	I0319 19:41:16.847736   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:41:16.847931   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:16.850441   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.850860   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:16.850898   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.851025   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:16.851185   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.851337   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.851452   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:16.851596   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:16.851790   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:16.851803   35208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 19:41:16.965993   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:41:16.966017   35208 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:41:16.966280   35208 buildroot.go:166] provisioning hostname "ha-218762"
	I0319 19:41:16.966305   35208 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:41:16.966527   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:16.969036   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.969448   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:16.969479   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:16.969647   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:16.969845   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.970009   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:16.970145   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:16.970307   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:16.970485   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:16.970499   35208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-218762 && echo "ha-218762" | sudo tee /etc/hostname
	I0319 19:41:17.113991   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-218762
	
	I0319 19:41:17.114013   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.116962   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.117351   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.117392   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.117610   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:17.117802   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.117973   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.118105   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:17.118225   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:17.118394   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:17.118411   35208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-218762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-218762/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-218762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:41:17.229575   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:41:17.229604   35208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 19:41:17.229649   35208 buildroot.go:174] setting up certificates
	I0319 19:41:17.229661   35208 provision.go:84] configureAuth start
	I0319 19:41:17.229678   35208 main.go:141] libmachine: (ha-218762) Calling .GetMachineName
	I0319 19:41:17.229933   35208 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:41:17.232658   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.233095   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.233115   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.233265   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.235742   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.236133   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.236162   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.236299   35208 provision.go:143] copyHostCerts
	I0319 19:41:17.236326   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:41:17.236357   35208 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 19:41:17.236366   35208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 19:41:17.236431   35208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 19:41:17.236502   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:41:17.236525   35208 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 19:41:17.236535   35208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 19:41:17.236569   35208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 19:41:17.236661   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:41:17.236685   35208 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 19:41:17.236695   35208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 19:41:17.236734   35208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 19:41:17.236806   35208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.ha-218762 san=[127.0.0.1 192.168.39.200 ha-218762 localhost minikube]
	I0319 19:41:17.404251   35208 provision.go:177] copyRemoteCerts
	I0319 19:41:17.404345   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:41:17.404366   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.407053   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.407434   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.407460   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.407635   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:17.407820   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.407969   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:17.408121   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:41:17.497355   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 19:41:17.497422   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:41:17.530581   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 19:41:17.530647   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0319 19:41:17.566597   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 19:41:17.566665   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 19:41:17.595371   35208 provision.go:87] duration metric: took 365.696289ms to configureAuth
	I0319 19:41:17.595394   35208 buildroot.go:189] setting minikube options for container-runtime
	I0319 19:41:17.595636   35208 config.go:182] Loaded profile config "ha-218762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:41:17.595714   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:41:17.598401   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.598793   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:41:17.598822   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:41:17.598981   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:41:17.599193   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.599350   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:41:17.599519   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:41:17.599675   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:41:17.599864   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:41:17.599889   35208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:42:52.485831   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:42:52.485871   35208 machine.go:97] duration metric: took 1m35.638131731s to provisionDockerMachine
	I0319 19:42:52.485897   35208 start.go:293] postStartSetup for "ha-218762" (driver="kvm2")
	I0319 19:42:52.485909   35208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:42:52.485927   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.486276   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:42:52.486302   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.489926   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.490368   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.490403   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.490533   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.490746   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.490968   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.491112   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:42:52.581637   35208 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:42:52.586866   35208 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 19:42:52.586894   35208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 19:42:52.586968   35208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 19:42:52.587065   35208 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 19:42:52.587077   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 19:42:52.587186   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:42:52.597449   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:42:52.627642   35208 start.go:296] duration metric: took 141.73325ms for postStartSetup
	I0319 19:42:52.627683   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.627983   35208 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0319 19:42:52.628015   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.630823   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.631246   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.631267   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.631463   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.631645   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.631805   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.631946   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	W0319 19:42:52.723150   35208 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0319 19:42:52.723176   35208 fix.go:56] duration metric: took 1m35.896145073s for fixHost
	I0319 19:42:52.723202   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.725960   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.726326   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.726354   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.726529   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.726699   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.726866   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.727015   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.727183   35208 main.go:141] libmachine: Using SSH client type: native
	I0319 19:42:52.727327   35208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0319 19:42:52.727337   35208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 19:42:52.837288   35208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710877372.793681474
	
	I0319 19:42:52.837309   35208 fix.go:216] guest clock: 1710877372.793681474
	I0319 19:42:52.837316   35208 fix.go:229] Guest: 2024-03-19 19:42:52.793681474 +0000 UTC Remote: 2024-03-19 19:42:52.723184592 +0000 UTC m=+96.046233179 (delta=70.496882ms)
	I0319 19:42:52.837371   35208 fix.go:200] guest clock delta is within tolerance: 70.496882ms
	I0319 19:42:52.837379   35208 start.go:83] releasing machines lock for "ha-218762", held for 1m36.010365328s
	I0319 19:42:52.837405   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.837669   35208 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:42:52.840360   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.840744   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.840762   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.840950   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.841491   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.841662   35208 main.go:141] libmachine: (ha-218762) Calling .DriverName
	I0319 19:42:52.841723   35208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:42:52.841769   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.841868   35208 ssh_runner.go:195] Run: cat /version.json
	I0319 19:42:52.841893   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHHostname
	I0319 19:42:52.844531   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.844556   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.844890   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.844920   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.844947   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:42:52.844963   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:42:52.845195   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.845209   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHPort
	I0319 19:42:52.845371   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.845409   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHKeyPath
	I0319 19:42:52.845517   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.845565   35208 main.go:141] libmachine: (ha-218762) Calling .GetSSHUsername
	I0319 19:42:52.845716   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:42:52.845733   35208 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/ha-218762/id_rsa Username:docker}
	I0319 19:42:52.979617   35208 ssh_runner.go:195] Run: systemctl --version
	I0319 19:42:53.002785   35208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:42:53.220725   35208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 19:42:53.232392   35208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 19:42:53.232450   35208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:42:53.249851   35208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 19:42:53.249881   35208 start.go:494] detecting cgroup driver to use...
	I0319 19:42:53.249937   35208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:42:53.271320   35208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:42:53.286718   35208 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:42:53.286778   35208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:42:53.302477   35208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:42:53.317014   35208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:42:53.478834   35208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:42:53.642433   35208 docker.go:233] disabling docker service ...
	I0319 19:42:53.642504   35208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:42:53.665498   35208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:42:53.680831   35208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:42:53.839658   35208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:42:54.008022   35208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:42:54.027025   35208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:42:54.048701   35208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 19:42:54.048759   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.061329   35208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:42:54.061393   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.076953   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.088346   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.102476   35208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:42:54.115577   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.130929   35208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.143218   35208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:42:54.157961   35208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:42:54.170666   35208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:42:54.182485   35208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:42:54.325865   35208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:44:28.626329   35208 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.300422214s)
	I0319 19:44:28.626364   35208 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:44:28.626426   35208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:44:28.634391   35208 start.go:562] Will wait 60s for crictl version
	I0319 19:44:28.634454   35208 ssh_runner.go:195] Run: which crictl
	I0319 19:44:28.638792   35208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:44:28.683452   35208 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 19:44:28.683526   35208 ssh_runner.go:195] Run: crio --version
	I0319 19:44:28.717521   35208 ssh_runner.go:195] Run: crio --version
	I0319 19:44:28.754397   35208 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 19:44:28.755829   35208 main.go:141] libmachine: (ha-218762) Calling .GetIP
	I0319 19:44:28.758523   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:44:28.758934   35208 main.go:141] libmachine: (ha-218762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:ad:c2", ip: ""} in network mk-ha-218762: {Iface:virbr1 ExpiryTime:2024-03-19 20:23:29 +0000 UTC Type:0 Mac:52:54:00:2b:ad:c2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-218762 Clientid:01:52:54:00:2b:ad:c2}
	I0319 19:44:28.758966   35208 main.go:141] libmachine: (ha-218762) DBG | domain ha-218762 has defined IP address 192.168.39.200 and MAC address 52:54:00:2b:ad:c2 in network mk-ha-218762
	I0319 19:44:28.759084   35208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 19:44:28.764360   35208 kubeadm.go:877] updating cluster {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:44:28.764477   35208 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:44:28.764517   35208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:44:28.843474   35208 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:44:28.843493   35208 crio.go:433] Images already preloaded, skipping extraction
	I0319 19:44:28.843538   35208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:44:28.882942   35208 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:44:28.882959   35208 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:44:28.882969   35208 kubeadm.go:928] updating node { 192.168.39.200 8443 v1.29.3 crio true true} ...
	I0319 19:44:28.883106   35208 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-218762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 19:44:28.883179   35208 ssh_runner.go:195] Run: crio config
	I0319 19:44:28.940881   35208 cni.go:84] Creating CNI manager for ""
	I0319 19:44:28.940899   35208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 19:44:28.940908   35208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 19:44:28.940926   35208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-218762 NodeName:ha-218762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:44:28.941041   35208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-218762"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
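	As a side note, the multi-document kubeadm config rendered above (and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new) can be sanity-checked with a short script before kubeadm consumes it. The sketch below is illustrative only: it assumes PyYAML is installed and uses a hypothetical local copy named "kubeadm.yaml".

    # Minimal sanity check for a multi-document kubeadm config like the one above.
    # Assumes PyYAML is installed; "kubeadm.yaml" is a hypothetical local copy of the
    # rendered config (on the node it is written to /var/tmp/minikube/kubeadm.yaml.new).
    import yaml

    with open("kubeadm.yaml") as f:
        docs = [d for d in yaml.safe_load_all(f) if d]

    # Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
    for doc in docs:
        print(doc.get("apiVersion"), doc.get("kind"))

    # Spot-check the values minikube rendered for this run.
    cluster = next(d for d in docs if d.get("kind") == "ClusterConfiguration")
    assert cluster["controlPlaneEndpoint"] == "control-plane.minikube.internal:8443"
    assert cluster["networking"]["podSubnet"] == "10.244.0.0/16"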
	
	I0319 19:44:28.941061   35208 kube-vip.go:111] generating kube-vip config ...
	I0319 19:44:28.941096   35208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0319 19:44:28.954823   35208 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0319 19:44:28.954941   35208 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
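	The kube-vip static pod above is configured entirely through container env vars (VIP address, leader election, control-plane load balancing). A small, hedged sketch of how those settings can be pulled back out of a manifest like this one, assuming PyYAML and a hypothetical local copy named "kube-vip.yaml" (on the node it is written to /etc/kubernetes/manifests/kube-vip.yaml):

    # Read the env-driven kube-vip settings from a static-pod manifest like the one above.
    # Assumes PyYAML; "kube-vip.yaml" is a hypothetical local copy of the manifest.
    import yaml

    with open("kube-vip.yaml") as f:
        pod = yaml.safe_load(f)

    env = {e["name"]: e.get("value") for e in pod["spec"]["containers"][0]["env"]}
    print("VIP address:      ", env["address"])            # 192.168.39.254 in the run above
    print("leader election:  ", env["vip_leaderelection"])
    print("control-plane LB: ", env.get("lb_enable"), "on port", env.get("lb_port"))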
	I0319 19:44:28.955017   35208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 19:44:28.969211   35208 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:44:28.969271   35208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0319 19:44:28.980593   35208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0319 19:44:28.999938   35208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:44:29.019651   35208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0319 19:44:29.040543   35208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0319 19:44:29.059211   35208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0319 19:44:29.063794   35208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:44:29.217711   35208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:44:29.235557   35208 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762 for IP: 192.168.39.200
	I0319 19:44:29.235586   35208 certs.go:194] generating shared ca certs ...
	I0319 19:44:29.235608   35208 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:44:29.235784   35208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 19:44:29.235832   35208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 19:44:29.235845   35208 certs.go:256] generating profile certs ...
	I0319 19:44:29.235939   35208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/client.key
	I0319 19:44:29.235974   35208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b
	I0319 19:44:29.235992   35208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200 192.168.39.234 192.168.39.254]
	I0319 19:44:29.345052   35208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b ...
	I0319 19:44:29.345079   35208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b: {Name:mkd29a79762cc534100f07cae164518eef04551c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:44:29.345242   35208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b ...
	I0319 19:44:29.345255   35208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b: {Name:mk529b650166385138eb1d2ad329973d7a28d535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:44:29.345322   35208 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt.51328e3b -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt
	I0319 19:44:29.345451   35208 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key.51328e3b -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key
	I0319 19:44:29.345568   35208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key
	I0319 19:44:29.345584   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 19:44:29.345596   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 19:44:29.345608   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 19:44:29.345621   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 19:44:29.345634   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 19:44:29.345646   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 19:44:29.345658   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 19:44:29.345672   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 19:44:29.345730   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 19:44:29.345757   35208 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 19:44:29.345767   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 19:44:29.345787   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:44:29.345809   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:44:29.345841   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 19:44:29.345877   35208 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 19:44:29.345904   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.345917   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.345931   35208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.346402   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:44:29.374347   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:44:29.401719   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:44:29.428926   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:44:29.456966   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 19:44:29.485465   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 19:44:29.513105   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:44:29.539658   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/ha-218762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 19:44:29.566586   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 19:44:29.595011   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 19:44:29.622754   35208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:44:29.650348   35208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:44:29.669862   35208 ssh_runner.go:195] Run: openssl version
	I0319 19:44:29.676931   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:44:29.690087   35208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.695507   35208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.695562   35208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:44:29.701954   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:44:29.713329   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 19:44:29.725776   35208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.730733   35208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.730789   35208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 19:44:29.738024   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 19:44:29.749004   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 19:44:29.761296   35208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.766484   35208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.766524   35208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 19:44:29.772975   35208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:44:29.785509   35208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:44:29.790774   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 19:44:29.797378   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 19:44:29.803595   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 19:44:29.809997   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 19:44:29.816293   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 19:44:29.822531   35208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
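	The repeated "openssl x509 -checkend 86400" calls above ask whether each certificate remains valid for at least another 24 hours before the cluster is (re)started. A rough equivalent of that check, sketched in Python under the assumption that the "cryptography" package is available (the certificate path is one of those checked in the log):

    # Rough equivalent of "openssl x509 -noout -in <cert> -checkend 86400".
    # Assumes the "cryptography" package is installed.
    from datetime import datetime, timedelta
    from cryptography import x509

    with open("/var/lib/minikube/certs/apiserver-kubelet-client.crt", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    ok_for_a_day = cert.not_valid_after >= datetime.utcnow() + timedelta(seconds=86400)
    print("Certificate will not expire within 24h" if ok_for_a_day
          else "Certificate will expire within 24h")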
	I0319 19:44:29.828877   35208 kubeadm.go:391] StartCluster: {Name:ha-218762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-218762 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.234 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:44:29.828970   35208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:44:29.829020   35208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:44:29.892645   35208 cri.go:89] found id: "57b67e1d9f71141c6f4f6f4b957958035b283dfca501b77981d1b74818ce4db1"
	I0319 19:44:29.892663   35208 cri.go:89] found id: "b97e6744af918e0a6261eb2d8bcffd93cddffe8d1e7dac960c123e06bbc3159a"
	I0319 19:44:29.892666   35208 cri.go:89] found id: "cd231cd9e49b3bdaa5129b1920f7a3f13cb3945bfc88fe936352caf5d2fd24ca"
	I0319 19:44:29.892669   35208 cri.go:89] found id: "6338f5654328272875bc7f69bbd52a9d23bd38cc097b510ff12597bb38c06d99"
	I0319 19:44:29.892671   35208 cri.go:89] found id: "b3ac103d077b7c8bdf08a2b9be60375c27ffbd3c1115dacf84d1e4b332ba486c"
	I0319 19:44:29.892674   35208 cri.go:89] found id: "29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc"
	I0319 19:44:29.892677   35208 cri.go:89] found id: "3759bb815b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0"
	I0319 19:44:29.892680   35208 cri.go:89] found id: "7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	I0319 19:44:29.892683   35208 cri.go:89] found id: "e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec"
	I0319 19:44:29.892688   35208 cri.go:89] found id: "8edf1240fc777c190f51409f022cbb052aa5e5a883ae32e71f2badc583c643db"
	I0319 19:44:29.892690   35208 cri.go:89] found id: "e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91"
	I0319 19:44:29.892693   35208 cri.go:89] found id: "d744b8d4b214183d33f26a5da25f91ab6e9af4f9eb80c41f50646291266262dc"
	I0319 19:44:29.892695   35208 cri.go:89] found id: "89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff"
	I0319 19:44:29.892698   35208 cri.go:89] found id: "76c29ad320500dff047f4ebb2d8b7477d0e87b271930025438f41d07e8cb0ad0"
	I0319 19:44:29.892702   35208 cri.go:89] found id: "109c2437b77127999b28eccefe736d6870b97c9eda16dc17355cb7053cebcd57"
	I0319 19:44:29.892704   35208 cri.go:89] found id: "4c1e36efc888a7064bb5bdfbe4a83995877d517236245521efa5e3bad97821f3"
	I0319 19:44:29.892707   35208 cri.go:89] found id: "ab7b5d52d6006536caac7af05d747e6abce37928a9db5b08a14a32a9f8db1ec6"
	I0319 19:44:29.892711   35208 cri.go:89] found id: "dc37df944702003608d704925db1515b753c461128e874e10764393af312326c"
	I0319 19:44:29.892713   35208 cri.go:89] found id: "b8f592d52269dabfe2a7042eb916bba9e73611bdbaf7b6350299574d5f36224a"
	I0319 19:44:29.892716   35208 cri.go:89] found id: ""
	I0319 19:44:29.892752   35208 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.266846707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aacf61dd-9d0b-4b8c-9cb3-d5c73683b2d7 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.270482125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52101933-12b1-4df9-8e0c-8a425e9a0bd4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.271007738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877994270983759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52101933-12b1-4df9-8e0c-8a425e9a0bd4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.271999489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59a70a97-d455-4209-94bc-3cf946128ef7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.272069943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59a70a97-d455-4209-94bc-3cf946128ef7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.273878345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:394f3ed197acab6ce3f6d0f0be1be987a4b19fdaf547fec0b527c418bbb80f99,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710877657145966258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef0bf5b8ee178ac1bd3ccd037311c8555a6ee1b54cf2a1f95255a44f879cd7b,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877647146925567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae89f00be21caeb51c3b930f89bd44a92976156935ea9212269c13db55060ab7,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710877646151750469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db0276f991bdb5680479e7c36e60718ab424e15270e003b223ec54300c006be3,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710877566155742012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7740d2acec0e277ace683db79688e13f8f5d145b699576d2cb9fdf8437be66c9,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710877543149482532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d91a329ed1a59fc03e85ea500208e87b966b74742f36bb5b17b30a018d1aeda,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710877542156333344,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386331c37e6d5f31ab1aab91b8f41c7ff6657770791472b1109398cb165212f,PodSandboxId:d4aa7041b98ced437d5bf6ed2d01face37c1c65ed3e208dcb48ff6a2ef16d1c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710877507498310922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6a316576b6b59e8571d3e1ff4b5445b0504e6eab804477b7ad88c70c3536a,PodSandboxId:bd23735d897acc1f03120d93a0dae2f8776716ce8898483ff3b744706ba65e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710877474711685098,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:1891bdddbfa72fb25c19daad8753839059b755f370a6499fb460e59d4f428c01,PodSandboxId:d3b27dac0c3fd67226c01d059595877fa9c58edd9f10707a73cbd53f06a5f982,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710877474837143927,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9900074827
fa9df87697dc90d2d7d47ff87dc199457cb3c408ca3ce5709acdb,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710877474376581603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f444266d3832b9383319fb49
dced0758e388af0b1a95f5d7756207d43618dee2,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710877474484694380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a21c6ec01381fea73b0c3ef8252397cf1d68413c3fe9b5e9231b23706
f6463,PodSandboxId:473c5b6210925c6d00541ce956def96caef8be0409bb01af9f2c99ff3a9626fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710877474585550448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd7c6593d1b1878f8338992a30639313ec454be7c7e5559b117f2dc61a647a,PodSandboxId:40e41dcc71168b0c3893b5151e314a9ec4f3686c2e1bb64fc9f06f27426670ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710877474280632897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b941b0f8db6de5d0aa436de34faaade5febdbdbb14c2b2925ec02deb93770e,PodSandboxId:3b3adc6bc03b27082c4ca5d2067677a929c37d55bfd7dafa70916047c20ee4fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710877474323415463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876947623239701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710876930321569757,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64d30502df53
7d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876914400363038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb815b0bd9f7c551da75693063ffa4e643d3787b518
033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914625197391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914560502616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710876914176381833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128
447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876914085162092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string
{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59a70a97-d455-4209-94bc-3cf946128ef7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.324162856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=622fb998-450c-48b6-9f4f-6136453a095c name=/runtime.v1.RuntimeService/Version
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.324252637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=622fb998-450c-48b6-9f4f-6136453a095c name=/runtime.v1.RuntimeService/Version
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.326680485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d665bc64-72ac-4ade-9d46-0ee3f4d73ea0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.327340105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877994327210920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d665bc64-72ac-4ade-9d46-0ee3f4d73ea0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.328974133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d713918a-4ad9-46de-8df2-1ff61eac455a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.329100437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d713918a-4ad9-46de-8df2-1ff61eac455a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.329473155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:394f3ed197acab6ce3f6d0f0be1be987a4b19fdaf547fec0b527c418bbb80f99,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710877657145966258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef0bf5b8ee178ac1bd3ccd037311c8555a6ee1b54cf2a1f95255a44f879cd7b,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877647146925567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae89f00be21caeb51c3b930f89bd44a92976156935ea9212269c13db55060ab7,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710877646151750469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db0276f991bdb5680479e7c36e60718ab424e15270e003b223ec54300c006be3,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710877566155742012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7740d2acec0e277ace683db79688e13f8f5d145b699576d2cb9fdf8437be66c9,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710877543149482532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d91a329ed1a59fc03e85ea500208e87b966b74742f36bb5b17b30a018d1aeda,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710877542156333344,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386331c37e6d5f31ab1aab91b8f41c7ff6657770791472b1109398cb165212f,PodSandboxId:d4aa7041b98ced437d5bf6ed2d01face37c1c65ed3e208dcb48ff6a2ef16d1c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710877507498310922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6a316576b6b59e8571d3e1ff4b5445b0504e6eab804477b7ad88c70c3536a,PodSandboxId:bd23735d897acc1f03120d93a0dae2f8776716ce8898483ff3b744706ba65e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710877474711685098,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:1891bdddbfa72fb25c19daad8753839059b755f370a6499fb460e59d4f428c01,PodSandboxId:d3b27dac0c3fd67226c01d059595877fa9c58edd9f10707a73cbd53f06a5f982,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710877474837143927,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9900074827
fa9df87697dc90d2d7d47ff87dc199457cb3c408ca3ce5709acdb,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710877474376581603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f444266d3832b9383319fb49
dced0758e388af0b1a95f5d7756207d43618dee2,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710877474484694380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a21c6ec01381fea73b0c3ef8252397cf1d68413c3fe9b5e9231b23706
f6463,PodSandboxId:473c5b6210925c6d00541ce956def96caef8be0409bb01af9f2c99ff3a9626fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710877474585550448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd7c6593d1b1878f8338992a30639313ec454be7c7e5559b117f2dc61a647a,PodSandboxId:40e41dcc71168b0c3893b5151e314a9ec4f3686c2e1bb64fc9f06f27426670ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710877474280632897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b941b0f8db6de5d0aa436de34faaade5febdbdbb14c2b2925ec02deb93770e,PodSandboxId:3b3adc6bc03b27082c4ca5d2067677a929c37d55bfd7dafa70916047c20ee4fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710877474323415463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876947623239701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710876930321569757,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64d30502df53
7d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876914400363038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb815b0bd9f7c551da75693063ffa4e643d3787b518
033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914625197391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914560502616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710876914176381833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128
447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876914085162092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string
{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d713918a-4ad9-46de-8df2-1ff61eac455a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.351199473Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=39526116-2cbe-4746-b24c-a03f44903b8c name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.351578552Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d4aa7041b98ced437d5bf6ed2d01face37c1c65ed3e208dcb48ff6a2ef16d1c8,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-d8xsk,Uid:6f5b6f71-8881-4429-a25f-ca62fef2f65c,Namespace:default,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877507353083521,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:26:59.828745702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-218762,Uid:3a5b9205182474b16bf57e1daaaef85f,Namespace:kube-system,Attempt:2,},State:S
ANDBOX_READY,CreatedAt:1710877473691723748,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.200:8443,kubernetes.io/config.hash: 3a5b9205182474b16bf57e1daaaef85f,kubernetes.io/config.seen: 2024-03-19T19:23:56.098581417Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:473c5b6210925c6d00541ce956def96caef8be0409bb01af9f2c99ff3a9626fb,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-6f64w,Uid:5b250bb2-07f0-46db-8e58-4584fbe4f882,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473690458944,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250
bb2-07f0-46db-8e58-4584fbe4f882,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:11.346902549Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&PodSandboxMetadata{Name:kindnet-d8pkw,Uid:566eb397-5ea5-4bc5-af28-3c5e9a12346b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473671285147,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:07.723707223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f7cf05b18d5bcd1131c842a49dd56723a8d0c0860e90edec747cbd8924a53d5,Metadata:&PodSandboxMet
adata{Name:coredns-76f75df574-zlz9l,Uid:5fd420b7-5377-4b53-b5c3-4e785436bd9e,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1710877473650420371,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:11.336103952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3b27dac0c3fd67226c01d059595877fa9c58edd9f10707a73cbd53f06a5f982,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-218762,Uid:a778244ddfdc629cac5708ab8625d7e6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710877473641492480,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotati
ons:map[string]string{kubernetes.io/config.hash: a778244ddfdc629cac5708ab8625d7e6,kubernetes.io/config.seen: 2024-03-19T19:35:08.125194064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3b3adc6bc03b27082c4ca5d2067677a929c37d55bfd7dafa70916047c20ee4fa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-218762,Uid:5f302ea3b128447ba623d807f71536e6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473584422883,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5f302ea3b128447ba623d807f71536e6,kubernetes.io/config.seen: 2024-03-19T19:23:56.098583760Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40e41dcc71168b0c3893b5151e314a9ec4f3686c2e1bb64fc9f06f27426670ae,Metadata:&PodSandboxMetadata{Name:e
tcd-ha-218762,Uid:f50238912ac80f884e60452838997ec3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473576399442,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.200:2379,kubernetes.io/config.hash: f50238912ac80f884e60452838997ec3,kubernetes.io/config.seen: 2024-03-19T19:23:56.098577382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd23735d897acc1f03120d93a0dae2f8776716ce8898483ff3b744706ba65e7a,Metadata:&PodSandboxMetadata{Name:kube-proxy-qd8kk,Uid:5c7dcc06-c11b-4173-9b5b-49aef039c7ee,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473570150382,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:07.716109830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-218762,Uid:5f7614111d98075e40b8f2e738a2e9cf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473558781845,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5f7614111d98075e40b8f2e738a2e9cf,kubernetes.io/config.seen: 2024-03-19T19:23:56.098582474Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6a496ada-aaf7-47a5-bd5d-5d909ef5df10,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710877473554901784,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"im
agePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-19T19:24:11.345369829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-d8xsk,Uid:6f5b6f71-8881-4429-a25f-ca62fef2f65c,Namespace:default,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710876947476595211,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:26:59.828745702Z,kubernetes.io/config.s
ource: api,},RuntimeHandler:,},&PodSandbox{Id:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-218762,Uid:a778244ddfdc629cac5708ab8625d7e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710876930216774255,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{kubernetes.io/config.hash: a778244ddfdc629cac5708ab8625d7e6,kubernetes.io/config.seen: 2024-03-19T19:35:08.125194064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-6f64w,Uid:5b250bb2-07f0-46db-8e58-4584fbe4f882,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710876913713743975,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:11.346902549Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-218762,Uid:5f302ea3b128447ba623d807f71536e6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710876913618623571,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5f302ea3b128447ba623d807f71536e6,kubernetes.io/config.seen: 2024-03-19T19:23:56.098583760Z,kubernetes.io/config.sourc
e: file,},RuntimeHandler:,},&PodSandbox{Id:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-zlz9l,Uid:5fd420b7-5377-4b53-b5c3-4e785436bd9e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710876913617488359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:11.336103952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&PodSandboxMetadata{Name:kube-proxy-qd8kk,Uid:5c7dcc06-c11b-4173-9b5b-49aef039c7ee,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710876913595548553,Labels:map[string]string{controller-revision-hash: 7659797656,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T19:24:07.716109830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&PodSandboxMetadata{Name:etcd-ha-218762,Uid:f50238912ac80f884e60452838997ec3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1710876913557300280,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.200:2379,kubernetes.io/config.hash: f50238912ac80f884e60452838997ec3,kubernetes.i
o/config.seen: 2024-03-19T19:23:56.098577382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=39526116-2cbe-4746-b24c-a03f44903b8c name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.352502868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4b70b4-3b9b-4c8f-991e-8f74b5b7464c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.352584181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4b70b4-3b9b-4c8f-991e-8f74b5b7464c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.353326584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:394f3ed197acab6ce3f6d0f0be1be987a4b19fdaf547fec0b527c418bbb80f99,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710877657145966258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef0bf5b8ee178ac1bd3ccd037311c8555a6ee1b54cf2a1f95255a44f879cd7b,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877647146925567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae89f00be21caeb51c3b930f89bd44a92976156935ea9212269c13db55060ab7,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710877646151750469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db0276f991bdb5680479e7c36e60718ab424e15270e003b223ec54300c006be3,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710877566155742012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7740d2acec0e277ace683db79688e13f8f5d145b699576d2cb9fdf8437be66c9,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710877543149482532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d91a329ed1a59fc03e85ea500208e87b966b74742f36bb5b17b30a018d1aeda,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710877542156333344,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386331c37e6d5f31ab1aab91b8f41c7ff6657770791472b1109398cb165212f,PodSandboxId:d4aa7041b98ced437d5bf6ed2d01face37c1c65ed3e208dcb48ff6a2ef16d1c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710877507498310922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6a316576b6b59e8571d3e1ff4b5445b0504e6eab804477b7ad88c70c3536a,PodSandboxId:bd23735d897acc1f03120d93a0dae2f8776716ce8898483ff3b744706ba65e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710877474711685098,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:1891bdddbfa72fb25c19daad8753839059b755f370a6499fb460e59d4f428c01,PodSandboxId:d3b27dac0c3fd67226c01d059595877fa9c58edd9f10707a73cbd53f06a5f982,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710877474837143927,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9900074827
fa9df87697dc90d2d7d47ff87dc199457cb3c408ca3ce5709acdb,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710877474376581603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f444266d3832b9383319fb49
dced0758e388af0b1a95f5d7756207d43618dee2,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710877474484694380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a21c6ec01381fea73b0c3ef8252397cf1d68413c3fe9b5e9231b23706
f6463,PodSandboxId:473c5b6210925c6d00541ce956def96caef8be0409bb01af9f2c99ff3a9626fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710877474585550448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd7c6593d1b1878f8338992a30639313ec454be7c7e5559b117f2dc61a647a,PodSandboxId:40e41dcc71168b0c3893b5151e314a9ec4f3686c2e1bb64fc9f06f27426670ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710877474280632897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b941b0f8db6de5d0aa436de34faaade5febdbdbb14c2b2925ec02deb93770e,PodSandboxId:3b3adc6bc03b27082c4ca5d2067677a929c37d55bfd7dafa70916047c20ee4fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710877474323415463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876947623239701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710876930321569757,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64d30502df53
7d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876914400363038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb815b0bd9f7c551da75693063ffa4e643d3787b518
033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914625197391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914560502616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710876914176381833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128
447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876914085162092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string
{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf4b70b4-3b9b-4c8f-991e-8f74b5b7464c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.382870094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf556c9b-dfc2-4101-b3a5-932a774ee058 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.382965686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf556c9b-dfc2-4101-b3a5-932a774ee058 name=/runtime.v1.RuntimeService/Version
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.385988873Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8ac540f-ea63-45fc-aa66-b291db79e70b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.386384238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710877994386363717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8ac540f-ea63-45fc-aa66-b291db79e70b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.387012686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6773249c-5436-4e76-a79d-6a6e0ba278dc name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.387091692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6773249c-5436-4e76-a79d-6a6e0ba278dc name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 19:53:14 ha-218762 crio[6654]: time="2024-03-19 19:53:14.387484737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:394f3ed197acab6ce3f6d0f0be1be987a4b19fdaf547fec0b527c418bbb80f99,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710877657145966258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef0bf5b8ee178ac1bd3ccd037311c8555a6ee1b54cf2a1f95255a44f879cd7b,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710877647146925567,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 6,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae89f00be21caeb51c3b930f89bd44a92976156935ea9212269c13db55060ab7,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710877646151750469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db0276f991bdb5680479e7c36e60718ab424e15270e003b223ec54300c006be3,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710877566155742012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7740d2acec0e277ace683db79688e13f8f5d145b699576d2cb9fdf8437be66c9,PodSandboxId:8dfed5244ebbf7f7e5c429ed02d3092af7be0303539c76d15f17038e41071e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710877543149482532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5b9205182474b16bf57e1daaaef85f,},Annotations:map[string]string{io.kubernetes.container.hash: d1e16ab4,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d91a329ed1a59fc03e85ea500208e87b966b74742f36bb5b17b30a018d1aeda,PodSandboxId:3bcbb607dce875dfc4569881915fc5c5ae9a230d8b5b3bb75ef82897520b2f78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710877542156333344,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7614111d98075e40b8f2e738a2e9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386331c37e6d5f31ab1aab91b8f41c7ff6657770791472b1109398cb165212f,PodSandboxId:d4aa7041b98ced437d5bf6ed2d01face37c1c65ed3e208dcb48ff6a2ef16d1c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710877507498310922,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6a316576b6b59e8571d3e1ff4b5445b0504e6eab804477b7ad88c70c3536a,PodSandboxId:bd23735d897acc1f03120d93a0dae2f8776716ce8898483ff3b744706ba65e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710877474711685098,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:1891bdddbfa72fb25c19daad8753839059b755f370a6499fb460e59d4f428c01,PodSandboxId:d3b27dac0c3fd67226c01d059595877fa9c58edd9f10707a73cbd53f06a5f982,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710877474837143927,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9900074827
fa9df87697dc90d2d7d47ff87dc199457cb3c408ca3ce5709acdb,PodSandboxId:3079114128a0b85a1e11119811b1203975a338f280e76f1c53c2b866094b92bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710877474376581603,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a496ada-aaf7-47a5-bd5d-5d909ef5df10,},Annotations:map[string]string{io.kubernetes.container.hash: 54b027a0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f444266d3832b9383319fb49
dced0758e388af0b1a95f5d7756207d43618dee2,PodSandboxId:b66959eda7e556687176975d8be56f614110b50036a0d7a22f1ee96019723a49,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710877474484694380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d8pkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566eb397-5ea5-4bc5-af28-3c5e9a12346b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7563b3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14a21c6ec01381fea73b0c3ef8252397cf1d68413c3fe9b5e9231b23706
f6463,PodSandboxId:473c5b6210925c6d00541ce956def96caef8be0409bb01af9f2c99ff3a9626fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710877474585550448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd7c6593d1b1878f8338992a30639313ec454be7c7e5559b117f2dc61a647a,PodSandboxId:40e41dcc71168b0c3893b5151e314a9ec4f3686c2e1bb64fc9f06f27426670ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710877474280632897,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b941b0f8db6de5d0aa436de34faaade5febdbdbb14c2b2925ec02deb93770e,PodSandboxId:3b3adc6bc03b27082c4ca5d2067677a929c37d55bfd7dafa70916047c20ee4fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710877474323415463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e184092c1753d21c33a2df5c686c95f64502ab58be44b7021bccc7b0bdd994e2,PodSandboxId:9ae1282eca7fdb655b8f20a609f7d6de6e62fecfa998a19d7c0dba658b095b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710876947623239701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-d8xsk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f5b6f71-8881-4429-a25f-ca62fef2f65c,},Annotations:map[string]string{io.kubernetes.container.hash: 700a52b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:29b54ac96c6cba02e31c6a9402db18541a471c986ce9502a266a5538ff42f5dc,PodSandboxId:241791cae01a3739073761fd45365e4b37df0166181bd2a35c80dc2fc36786f0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710876930321569757,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a778244ddfdc629cac5708ab8625d7e6,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e64d30502df53
7d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec,PodSandboxId:a0b75df1436e143a6e894669122322526e950897648de02ce3fbb73967264b52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710876914400363038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd8kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c7dcc06-c11b-4173-9b5b-49aef039c7ee,},Annotations:map[string]string{io.kubernetes.container.hash: d53cc685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759bb815b0bd9f7c551da75693063ffa4e643d3787b518
033b31bc85c7cc8f0,PodSandboxId:8b012633323a107661e99b051eadcd49c18f25106841cf30a8997a4bfb595466,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914625197391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6f64w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b250bb2-07f0-46db-8e58-4584fbe4f882,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a36eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47,PodSandboxId:b66ed00d03541d54ebc1c37df5c896379e073a26c3b5f34ebf5572259f57c59a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710876914560502616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zlz9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fd420b7-5377-4b53-b5c3-4e785436bd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 78a65d9a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91,PodSandboxId:3ee688cdd562c1b1a6f195834a8e916ee61a503ccb51eb8eb4cd44c2da8ff6bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710876914176381833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f302ea3b128
447ba623d807f71536e6,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff,PodSandboxId:c1a4e502ec750069cef30e357a20c1d9283a5c5f50e90a9442cf3260f278c7a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710876914085162092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-218762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f50238912ac80f884e60452838997ec3,},Annotations:map[string]string
{io.kubernetes.container.hash: c6ebe92,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6773249c-5436-4e76-a79d-6a6e0ba278dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	394f3ed197aca       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   5 minutes ago       Running             kube-controller-manager   5                   3bcbb607dce87       kube-controller-manager-ha-218762
	cef0bf5b8ee17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       6                   3079114128a0b       storage-provisioner
	ae89f00be21ca       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   5 minutes ago       Running             kube-apiserver            6                   8dfed5244ebbf       kube-apiserver-ha-218762
	db0276f991bdb       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   7 minutes ago       Running             kindnet-cni               5                   b66959eda7e55       kindnet-d8pkw
	7740d2acec0e2       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   7 minutes ago       Exited              kube-apiserver            5                   8dfed5244ebbf       kube-apiserver-ha-218762
	7d91a329ed1a5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   7 minutes ago       Exited              kube-controller-manager   4                   3bcbb607dce87       kube-controller-manager-ha-218762
	4386331c37e6d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   8 minutes ago       Running             busybox                   2                   d4aa7041b98ce       busybox-7fdf7869d9-d8xsk
	1891bdddbfa72       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   8 minutes ago       Running             kube-vip                  1                   d3b27dac0c3fd       kube-vip-ha-218762
	89c6a316576b6       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   8 minutes ago       Running             kube-proxy                2                   bd23735d897ac       kube-proxy-qd8kk
	14a21c6ec0138       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   8 minutes ago       Running             coredns                   2                   473c5b6210925       coredns-76f75df574-6f64w
	f444266d3832b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   8 minutes ago       Exited              kindnet-cni               4                   b66959eda7e55       kindnet-d8pkw
	b9900074827fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 minutes ago       Exited              storage-provisioner       5                   3079114128a0b       storage-provisioner
	f5b941b0f8db6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   8 minutes ago       Running             kube-scheduler            2                   3b3adc6bc03b2       kube-scheduler-ha-218762
	66fd7c6593d1b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 minutes ago       Running             etcd                      2                   40e41dcc71168       etcd-ha-218762
	e184092c1753d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago      Exited              busybox                   1                   9ae1282eca7fd       busybox-7fdf7869d9-d8xsk
	29b54ac96c6cb       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   17 minutes ago      Exited              kube-vip                  0                   241791cae01a3       kube-vip-ha-218762
	3759bb815b0bd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Exited              coredns                   1                   8b012633323a1       coredns-76f75df574-6f64w
	7fe718f015a06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Exited              coredns                   1                   b66ed00d03541       coredns-76f75df574-zlz9l
	e64d30502df53       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   18 minutes ago      Exited              kube-proxy                1                   a0b75df1436e1       kube-proxy-qd8kk
	e004ed7f983d2       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   18 minutes ago      Exited              kube-scheduler            1                   3ee688cdd562c       kube-scheduler-ha-218762
	89ce3ef06f55e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Exited              etcd                      1                   c1a4e502ec750       etcd-ha-218762
	
	
	==> coredns [14a21c6ec01381fea73b0c3ef8252397cf1d68413c3fe9b5e9231b23706f6463] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53538->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:53538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [3759bb815b0bd9f7c551da75693063ffa4e643d3787b518033b31bc85c7cc8f0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45534->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45534->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45532->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45532->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-218762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_23_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:53:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:51:17 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:51:17 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:51:17 +0000   Tue, 19 Mar 2024 19:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:51:17 +0000   Tue, 19 Mar 2024 19:24:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-218762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee6305e340734ffab00fb0013188dc6a
	  System UUID:                ee6305e3-4073-4ffa-b00f-b0013188dc6a
	  Boot ID:                    4a3c9f80-1526-4057-9e0e-fd3e10e41bd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d8xsk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-76f75df574-6f64w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-76f75df574-zlz9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-218762                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-d8pkw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-218762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-218762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-qd8kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-218762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-218762                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 7m54s                kube-proxy       
	  Normal   Starting                 17m                  kube-proxy       
	  Normal   Starting                 29m                  kube-proxy       
	  Normal   Starting                 29m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     29m (x7 over 29m)    kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29m (x8 over 29m)    kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  29m (x8 over 29m)    kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29m                  kubelet          Node ha-218762 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 29m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  29m                  kubelet          Node ha-218762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     29m                  kubelet          Node ha-218762 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29m                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   NodeReady                29m                  kubelet          Node ha-218762 status is now: NodeReady
	  Normal   RegisteredNode           27m                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           26m                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Warning  ContainerGCFailed        9m18s (x5 over 19m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6m53s                node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	  Normal   RegisteredNode           5m26s                node-controller  Node ha-218762 event: Registered Node ha-218762 in Controller
	
	
	Name:               ha-218762-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_25_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:25:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:53:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:51:18 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:51:18 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:51:18 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:51:18 +0000   Tue, 19 Mar 2024 19:39:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-218762-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21ee6ca9760341f0b88147e7d26bc5a4
	  System UUID:                21ee6ca9-7603-41f0-b881-47e7d26bc5a4
	  Boot ID:                    93ea4244-1402-4285-9999-90af84712cb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ds2kh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-218762-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-4b7jg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-218762-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-218762-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-9q4nx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-218762-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-218762-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m1s                   kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           27m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   RegisteredNode           27m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   RegisteredNode           26m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   NodeNotReady             24m                    node-controller  Node ha-218762-m02 status is now: NodeNotReady
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-218762-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-218762-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-218762-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   NodeNotReady             13m                    node-controller  Node ha-218762-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        7m42s (x2 over 8m42s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6m53s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	  Normal   RegisteredNode           5m26s                  node-controller  Node ha-218762-m02 event: Registered Node ha-218762-m02 in Controller
	
	
	Name:               ha-218762-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-218762-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=ha-218762
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T19_27_38_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-218762-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 19:53:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 19:53:02 +0000   Tue, 19 Mar 2024 19:52:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 19:53:02 +0000   Tue, 19 Mar 2024 19:52:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 19:53:02 +0000   Tue, 19 Mar 2024 19:52:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 19:53:02 +0000   Tue, 19 Mar 2024 19:52:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-218762-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3252307468a44b83a5ab5199d03a0035
	  System UUID:                32523074-68a4-4b83-a5ab-5199d03a0035
	  Boot ID:                    135d0815-874d-4a4a-95b6-d7944b75cba8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7l527    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kindnet-hslwj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-nth69            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           25m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   NodeReady                25m                kubelet          Node ha-218762-m04 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   NodeNotReady             16m                node-controller  Node ha-218762-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 14m (x2 over 14m)  kubelet          Node ha-218762-m04 has been rebooted, boot id: a0d24f10-73b5-4b9e-ae00-6b857db48ab4
	  Normal   NodeReady                14m (x2 over 14m)  kubelet          Node ha-218762-m04 status is now: NodeReady
	  Normal   NodeNotReady             13m                node-controller  Node ha-218762-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           6m53s              node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   RegisteredNode           5m26s              node-controller  Node ha-218762-m04 event: Registered Node ha-218762-m04 in Controller
	  Normal   Starting                 42s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  42s (x2 over 42s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x2 over 42s)  kubelet          Node ha-218762-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x2 over 42s)  kubelet          Node ha-218762-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 42s                kubelet          Node ha-218762-m04 has been rebooted, boot id: 135d0815-874d-4a4a-95b6-d7944b75cba8
	  Normal   NodeReady                42s                kubelet          Node ha-218762-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.098669] kauditd_printk_skb: 51 callbacks suppressed
	[Mar19 19:24] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 19:25] kauditd_printk_skb: 74 callbacks suppressed
	[Mar19 19:35] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	[  +0.163668] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +0.200453] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[  +0.176254] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.314534] systemd-fstab-generator[3782]: Ignoring "noauto" option for root device
	[  +2.399621] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +5.371303] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.623371] kauditd_printk_skb: 98 callbacks suppressed
	[ +37.084386] kauditd_printk_skb: 1 callbacks suppressed
	[Mar19 19:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.825614] kauditd_printk_skb: 1 callbacks suppressed
	[Mar19 19:42] systemd-fstab-generator[6548]: Ignoring "noauto" option for root device
	[  +0.163512] systemd-fstab-generator[6560]: Ignoring "noauto" option for root device
	[  +0.192570] systemd-fstab-generator[6574]: Ignoring "noauto" option for root device
	[  +0.164292] systemd-fstab-generator[6586]: Ignoring "noauto" option for root device
	[  +0.336325] systemd-fstab-generator[6614]: Ignoring "noauto" option for root device
	[Mar19 19:44] systemd-fstab-generator[6752]: Ignoring "noauto" option for root device
	[  +0.089357] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.029694] kauditd_printk_skb: 61 callbacks suppressed
	[ +12.032187] kauditd_printk_skb: 46 callbacks suppressed
	[Mar19 19:45] kauditd_printk_skb: 1 callbacks suppressed
	[Mar19 19:47] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [66fd7c6593d1b1878f8338992a30639313ec454be7c7e5559b117f2dc61a647a] <==
	{"level":"info","ts":"2024-03-19T19:46:06.746867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-03-19T19:46:06.746925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 5"}
	{"level":"info","ts":"2024-03-19T19:46:06.746953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 5"}
	{"level":"info","ts":"2024-03-19T19:46:06.747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 [logterm: 4, index: 3804] sent MsgVote request to d0b3f768cc94194d at term 5"}
	{"level":"info","ts":"2024-03-19T19:46:06.756451Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fe8c4457455e3a5","to":"d0b3f768cc94194d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-19T19:46:06.756507Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:46:06.764661Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fe8c4457455e3a5","to":"d0b3f768cc94194d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-19T19:46:06.764747Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:46:06.772875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from d0b3f768cc94194d at term 5"}
	{"level":"info","ts":"2024-03-19T19:46:06.773003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-03-19T19:46:06.773081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 5"}
	{"level":"info","ts":"2024-03-19T19:46:06.773109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 5"}
	{"level":"info","ts":"2024-03-19T19:46:06.782038Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:ha-218762 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T19:46:06.782488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T19:46:06.783333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T19:46:06.783851Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T19:46:06.783932Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T19:46:06.786878Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-03-19T19:46:06.789366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-03-19T19:46:06.795133Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:35084","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:35084: write: broken pipe"}
	{"level":"warn","ts":"2024-03-19T19:46:06.800203Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:35088","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:35088: write: broken pipe"}
	{"level":"warn","ts":"2024-03-19T19:46:06.805142Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:35098","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:35098: write: broken pipe"}
	{"level":"warn","ts":"2024-03-19T19:46:06.810747Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-19T19:46:06.815647Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-19T19:46:06.821018Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	
	
	==> etcd [89ce3ef06f55e12ae5ed47defffc76cfe083b7b7d48237ed646c18b55dbb35ff] <==
	{"level":"info","ts":"2024-03-19T19:41:17.828229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 [term 3] starts to transfer leadership to d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:41:17.828248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 sends MsgTimeoutNow to d0b3f768cc94194d immediately as d0b3f768cc94194d already has up-to-date log"}
	{"level":"info","ts":"2024-03-19T19:41:17.831058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 [term: 3] received a MsgVote message with higher term from d0b3f768cc94194d [term: 4]"}
	{"level":"info","ts":"2024-03-19T19:41:17.831116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became follower at term 4"}
	{"level":"info","ts":"2024-03-19T19:41:17.831132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 [logterm: 3, index: 3803, vote: 0] cast MsgVote for d0b3f768cc94194d [logterm: 3, index: 3803] at term 4"}
	{"level":"info","ts":"2024-03-19T19:41:17.831146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 lost leader fe8c4457455e3a5 at term 4"}
	{"level":"info","ts":"2024-03-19T19:41:17.833287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader d0b3f768cc94194d at term 4"}
	{"level":"info","ts":"2024-03-19T19:41:17.92904Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"fe8c4457455e3a5","old-leader-member-id":"fe8c4457455e3a5","new-leader-member-id":"d0b3f768cc94194d","took":"100.875347ms"}
	{"level":"info","ts":"2024-03-19T19:41:17.929297Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"warn","ts":"2024-03-19T19:41:17.930401Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:41:17.930459Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"warn","ts":"2024-03-19T19:41:17.931454Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:41:17.931505Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:41:17.931574Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"warn","ts":"2024-03-19T19:41:17.931972Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","error":"context canceled"}
	{"level":"warn","ts":"2024-03-19T19:41:17.93207Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d0b3f768cc94194d","error":"failed to read d0b3f768cc94194d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-19T19:41:17.932109Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"warn","ts":"2024-03-19T19:41:17.93245Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d","error":"context canceled"}
	{"level":"info","ts":"2024-03-19T19:41:17.932498Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fe8c4457455e3a5","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:41:17.932514Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d0b3f768cc94194d"}
	{"level":"info","ts":"2024-03-19T19:41:17.939143Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"warn","ts":"2024-03-19T19:41:17.939382Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.234:53852","server-name":"","error":"read tcp 192.168.39.200:2380->192.168.39.234:53852: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T19:41:17.939432Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.234:53858","server-name":"","error":"read tcp 192.168.39.200:2380->192.168.39.234:53858: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T19:41:18.485211Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-03-19T19:41:18.485287Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-218762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"]}
	
	
	==> kernel <==
	 19:53:15 up 29 min,  0 users,  load average: 0.11, 0.16, 0.23
	Linux ha-218762 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [db0276f991bdb5680479e7c36e60718ab424e15270e003b223ec54300c006be3] <==
	I0319 19:52:29.439451       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:52:39.449609       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:52:39.449660       1 main.go:227] handling current node
	I0319 19:52:39.449676       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:52:39.449682       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:52:39.449902       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:52:39.449936       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:52:49.467109       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:52:49.467158       1 main.go:227] handling current node
	I0319 19:52:49.467177       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:52:49.467183       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:52:49.467311       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:52:49.467343       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:52:59.476458       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:52:59.476614       1 main.go:227] handling current node
	I0319 19:52:59.476659       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:52:59.476687       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:52:59.476927       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:52:59.476977       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	I0319 19:53:09.484105       1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
	I0319 19:53:09.484153       1 main.go:227] handling current node
	I0319 19:53:09.484164       1 main.go:223] Handling node with IPs: map[192.168.39.234:{}]
	I0319 19:53:09.484170       1 main.go:250] Node ha-218762-m02 has CIDR [10.244.1.0/24] 
	I0319 19:53:09.484279       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0319 19:53:09.484317       1 main.go:250] Node ha-218762-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f444266d3832b9383319fb49dced0758e388af0b1a95f5d7756207d43618dee2] <==
	I0319 19:44:35.278195       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0319 19:44:35.278366       1 main.go:107] hostIP = 192.168.39.200
	podIP = 192.168.39.200
	I0319 19:44:35.278578       1 main.go:116] setting mtu 1500 for CNI 
	I0319 19:44:35.280908       1 main.go:146] kindnetd IP family: "ipv4"
	I0319 19:44:35.280995       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0319 19:44:35.599166       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0319 19:44:35.600068       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0319 19:44:36.603428       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0319 19:44:38.604491       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0319 19:44:41.607560       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [7740d2acec0e277ace683db79688e13f8f5d145b699576d2cb9fdf8437be66c9] <==
	I0319 19:45:43.351744       1 options.go:222] external host was not specified, using 192.168.39.200
	I0319 19:45:43.353021       1 server.go:148] Version: v1.29.3
	I0319 19:45:43.353130       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:45:43.632661       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0319 19:45:43.636409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0319 19:45:43.636477       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0319 19:45:43.636715       1 instance.go:297] Using reconciler: lease
	W0319 19:46:03.631103       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0319 19:46:03.633177       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0319 19:46:03.638532       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0319 19:46:03.638534       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ae89f00be21caeb51c3b930f89bd44a92976156935ea9212269c13db55060ab7] <==
	I0319 19:47:28.283891       1 establishing_controller.go:76] Starting EstablishingController
	I0319 19:47:28.283963       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0319 19:47:28.284003       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0319 19:47:28.284022       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0319 19:47:28.283975       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 19:47:28.284032       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 19:47:28.282939       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0319 19:47:28.282927       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0319 19:47:28.362394       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 19:47:28.382255       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 19:47:28.382375       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 19:47:28.384442       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 19:47:28.384527       1 aggregator.go:165] initial CRD sync complete...
	I0319 19:47:28.384535       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 19:47:28.384540       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 19:47:28.384550       1 cache.go:39] Caches are synced for autoregister controller
	I0319 19:47:28.384718       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 19:47:28.385112       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 19:47:28.385173       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0319 19:47:28.386116       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 19:47:28.412450       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 19:47:29.290307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0319 19:47:29.731175       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200 192.168.39.234]
	I0319 19:47:29.732709       1 controller.go:624] quota admission added evaluator for: endpoints
	I0319 19:47:29.739506       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [394f3ed197acab6ce3f6d0f0be1be987a4b19fdaf547fec0b527c418bbb80f99] <==
	I0319 19:47:48.847781       1 shared_informer.go:318] Caches are synced for GC
	I0319 19:47:48.848631       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-218762"
	I0319 19:47:48.849391       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-218762-m02"
	I0319 19:47:48.849466       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-218762-m04"
	I0319 19:47:48.849582       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0319 19:47:48.850133       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0319 19:47:48.854585       1 shared_informer.go:318] Caches are synced for ephemeral
	I0319 19:47:48.895530       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0319 19:47:48.896591       1 shared_informer.go:318] Caches are synced for daemon sets
	I0319 19:47:48.898997       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0319 19:47:48.899545       1 shared_informer.go:318] Caches are synced for deployment
	I0319 19:47:48.901469       1 shared_informer.go:318] Caches are synced for crt configmap
	I0319 19:47:48.917627       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0319 19:47:48.929153       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0319 19:47:48.982495       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 19:47:48.981149       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 19:47:49.047489       1 shared_informer.go:318] Caches are synced for persistent volume
	I0319 19:47:49.415431       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 19:47:49.446066       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 19:47:49.446120       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0319 19:52:32.496410       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-218762-m04"
	I0319 19:52:33.372334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="88.743µs"
	I0319 19:52:33.923765       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7l527" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-7l527"
	I0319 19:52:39.758287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.253769ms"
	I0319 19:52:39.758442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.651µs"
	
	
	==> kube-controller-manager [7d91a329ed1a59fc03e85ea500208e87b966b74742f36bb5b17b30a018d1aeda] <==
	I0319 19:45:42.925350       1 serving.go:380] Generated self-signed cert in-memory
	I0319 19:45:43.102948       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0319 19:45:43.103049       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:45:43.104663       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 19:45:43.104911       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 19:45:43.105272       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0319 19:45:43.105350       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0319 19:46:04.645565       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.200:8443/healthz\": dial tcp 192.168.39.200:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.200:50568->192.168.39.200:8443: read: connection reset by peer"
	
	
	==> kube-proxy [89c6a316576b6b59e8571d3e1ff4b5445b0504e6eab804477b7ad88c70c3536a] <==
	E0319 19:45:29.571742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:35.715185       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0319 19:45:35.715196       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:45:35.715402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:35.715463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:35.715428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:45:38.786406       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:38.786481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:45:44.931837       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:44.931980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:48.002700       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0319 19:45:48.003225       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:48.003299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:45:48.003410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:45:48.003442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:46:00.291389       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0319 19:46:06.434380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:46:06.434409       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:46:06.434469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:46:06.434470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-218762&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0319 19:46:09.507428       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0319 19:46:09.507558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0319 19:46:36.708962       1 shared_informer.go:318] Caches are synced for node config
	I0319 19:46:48.207961       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:46:55.408319       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e64d30502df537d8eb5015d008f97b3dd96575c56db38d8d35437633907d3aec] <==
	I0319 19:35:16.231246       1 server_others.go:72] "Using iptables proxy"
	E0319 19:35:17.922736       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:20.994762       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:24.067129       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:30.212164       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0319 19:35:42.499409       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-218762\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0319 19:36:01.210939       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	I0319 19:36:01.284898       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:36:01.284967       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:36:01.285005       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:36:01.289546       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:36:01.290085       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:36:01.290131       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:36:01.292784       1 config.go:188] "Starting service config controller"
	I0319 19:36:01.292917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:36:01.294451       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:36:01.294490       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:36:01.295482       1 config.go:315] "Starting node config controller"
	I0319 19:36:01.295520       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:36:01.394506       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:36:01.395963       1 shared_informer.go:318] Caches are synced for node config
	I0319 19:36:01.396039       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e004ed7f983d20fe9645cb49a42a208317598a695636a9cb3652bddd18bc1e91] <==
	W0319 19:35:56.245959       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:56.246001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:56.710434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:56.710481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:57.032096       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:57.032202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:57.643428       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:57.643496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.000457       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.000501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.179736       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.179931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:35:58.398246       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:35:58.398328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:36:00.710582       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 19:36:00.710646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0319 19:36:19.987053       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0319 19:38:37.419998       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-d2xjc\": pod busybox-7fdf7869d9-d2xjc is already assigned to node \"ha-218762-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-d2xjc" node="ha-218762-m04"
	E0319 19:38:37.420742       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 2b74c569-a965-4d06-9151-f04ea13408a5(default/busybox-7fdf7869d9-d2xjc) wasn't assumed so cannot be forgotten"
	E0319 19:38:37.421117       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-d2xjc\": pod busybox-7fdf7869d9-d2xjc is already assigned to node \"ha-218762-m04\"" pod="default/busybox-7fdf7869d9-d2xjc"
	I0319 19:38:37.421215       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-d2xjc" node="ha-218762-m04"
	I0319 19:41:17.746946       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0319 19:41:17.747049       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 19:41:17.747354       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0319 19:41:17.747610       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f5b941b0f8db6de5d0aa436de34faaade5febdbdbb14c2b2925ec02deb93770e] <==
	W0319 19:46:52.411528       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:46:52.411569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:46:54.354438       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:46:54.354545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.200:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:46:55.224601       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:46:55.224726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.200:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:46:58.018647       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.200:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:46:58.018740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.200:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:02.120179       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:02.120334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:02.668887       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:02.668966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:06.333181       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:06.333265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.200:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:17.150698       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:17.150866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.200:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:17.421689       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.200:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:17.421739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.200:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:20.154982       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:20.155081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.200:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:23.401434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.200:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:23.401534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.200:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	W0319 19:47:26.259595       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.200:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	E0319 19:47:26.259686       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.200:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.200:8443: connect: connection refused
	I0319 19:48:02.779561       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 19:52:15 ha-218762 kubelet[1386]: I0319 19:52:15.133939    1386 scope.go:117] "RemoveContainer" containerID="7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	Mar 19 19:52:15 ha-218762 kubelet[1386]: E0319 19:52:15.146891    1386 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="0f7cf05b18d5bcd1131c842a49dd56723a8d0c0860e90edec747cbd8924a53d5"
	Mar 19 19:52:15 ha-218762 kubelet[1386]: E0319 19:52:15.147063    1386 kuberuntime_manager.go:1262] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-55wwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-76f75df574-zlz9l_kube-system(5fd420b7-5377-4b53-b5c3-4e785436bd9e): CreateContainerError: the container name "k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use
	Mar 19 19:52:15 ha-218762 kubelet[1386]: E0319 19:52:15.147133    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"the container name \\\"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\\\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/coredns-76f75df574-zlz9l" podUID="5fd420b7-5377-4b53-b5c3-4e785436bd9e"
	Mar 19 19:52:28 ha-218762 kubelet[1386]: I0319 19:52:28.133682    1386 scope.go:117] "RemoveContainer" containerID="7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	Mar 19 19:52:28 ha-218762 kubelet[1386]: E0319 19:52:28.144009    1386 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="0f7cf05b18d5bcd1131c842a49dd56723a8d0c0860e90edec747cbd8924a53d5"
	Mar 19 19:52:28 ha-218762 kubelet[1386]: E0319 19:52:28.144467    1386 kuberuntime_manager.go:1262] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-55wwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-76f75df574-zlz9l_kube-system(5fd420b7-5377-4b53-b5c3-4e785436bd9e): CreateContainerError: the container name "k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use
	Mar 19 19:52:28 ha-218762 kubelet[1386]: E0319 19:52:28.144873    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"the container name \\\"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\\\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/coredns-76f75df574-zlz9l" podUID="5fd420b7-5377-4b53-b5c3-4e785436bd9e"
	Mar 19 19:52:39 ha-218762 kubelet[1386]: I0319 19:52:39.134235    1386 scope.go:117] "RemoveContainer" containerID="7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	Mar 19 19:52:39 ha-218762 kubelet[1386]: E0319 19:52:39.151256    1386 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="0f7cf05b18d5bcd1131c842a49dd56723a8d0c0860e90edec747cbd8924a53d5"
	Mar 19 19:52:39 ha-218762 kubelet[1386]: E0319 19:52:39.151410    1386 kuberuntime_manager.go:1262] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-55wwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-76f75df574-zlz9l_kube-system(5fd420b7-5377-4b53-b5c3-4e785436bd9e): CreateContainerError: the container name "k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use
	Mar 19 19:52:39 ha-218762 kubelet[1386]: E0319 19:52:39.151490    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"the container name \\\"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\\\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/coredns-76f75df574-zlz9l" podUID="5fd420b7-5377-4b53-b5c3-4e785436bd9e"
	Mar 19 19:52:52 ha-218762 kubelet[1386]: I0319 19:52:52.134648    1386 scope.go:117] "RemoveContainer" containerID="7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	Mar 19 19:52:52 ha-218762 kubelet[1386]: E0319 19:52:52.150273    1386 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="0f7cf05b18d5bcd1131c842a49dd56723a8d0c0860e90edec747cbd8924a53d5"
	Mar 19 19:52:52 ha-218762 kubelet[1386]: E0319 19:52:52.150535    1386 kuberuntime_manager.go:1262] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-55wwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-76f75df574-zlz9l_kube-system(5fd420b7-5377-4b53-b5c3-4e785436bd9e): CreateContainerError: the container name "k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use
	Mar 19 19:52:52 ha-218762 kubelet[1386]: E0319 19:52:52.150639    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"the container name \\\"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\\\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/coredns-76f75df574-zlz9l" podUID="5fd420b7-5377-4b53-b5c3-4e785436bd9e"
	Mar 19 19:52:56 ha-218762 kubelet[1386]: E0319 19:52:56.166890    1386 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 19:52:56 ha-218762 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 19:52:56 ha-218762 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 19:52:56 ha-218762 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 19:52:56 ha-218762 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 19:53:07 ha-218762 kubelet[1386]: I0319 19:53:07.134301    1386 scope.go:117] "RemoveContainer" containerID="7fe718f015a0678406b5f1f78bb570dd112f5f0ad969cafa444b0aa28235eb47"
	Mar 19 19:53:07 ha-218762 kubelet[1386]: E0319 19:53:07.142065    1386 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="0f7cf05b18d5bcd1131c842a49dd56723a8d0c0860e90edec747cbd8924a53d5"
	Mar 19 19:53:07 ha-218762 kubelet[1386]: E0319 19:53:07.142216    1386 kuberuntime_manager.go:1262] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-55wwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-76f75df574-zlz9l_kube-system(5fd420b7-5377-4b53-b5c3-4e785436bd9e): CreateContainerError: the container name "k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use
	Mar 19 19:53:07 ha-218762 kubelet[1386]: E0319 19:53:07.142278    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"the container name \\\"k8s_coredns_coredns-76f75df574-zlz9l_kube-system_5fd420b7-5377-4b53-b5c3-4e785436bd9e_2\\\" is already in use by caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/coredns-76f75df574-zlz9l" podUID="5fd420b7-5377-4b53-b5c3-4e785436bd9e"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 19:53:13.913676   37273 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
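The "bufio.Scanner: token too long" error in the stderr block above is not a cluster failure; it comes from Go's default 64 KiB per-line scanner limit when minikube reads back lastStart.txt. A minimal shell check, using the same path reported in the log, to confirm whether the file really contains an oversized line:

    # Print the longest line length (in characters) found in lastStart.txt.
    # Anything above 65536 exceeds bufio.Scanner's default token size and
    # produces the "token too long" error seen above.
    awk '{ if (length > max) max = length } END { print max }' \
        /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt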
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-218762 -n ha-218762
helpers_test.go:261: (dbg) Run:  kubectl --context ha-218762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (719.42s)
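The dominant error in the RestartCluster kubelet log above is a single repeating CreateContainerError: CRI-O still holds an exited container that owns the generated coredns container name, so every retry to start a replacement fails with "name is already in use". The test suite does not do this itself, but the following is an illustrative manual cleanup sketch (the profile name, pod, and container ID are taken from the log above; the crictl invocation assumes the node's CRI tooling is reachable over minikube ssh):

    # Show all containers on the control-plane node, including exited ones,
    # then remove the stale container that still owns the coredns name.
    out/minikube-linux-amd64 -p ha-218762 ssh -- sudo crictl ps -a | grep coredns
    out/minikube-linux-amd64 -p ha-218762 ssh -- \
        sudo crictl rm caeef20bbee8a647758136c7d0cc3ffa59f4105d6130aaad7fa555ccfc92f558

Once the stale container is removed, kubelet's next RemoveContainer/CreateContainer cycle should be able to reuse the name.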

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (310.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-695944
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-695944
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-695944: exit status 82 (2m2.687938743s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-695944-m03"  ...
	* Stopping node "multinode-695944-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-695944" : exit status 82
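Exit status 82 here is minikube's GUEST_STOP_TIMEOUT: the kvm2 driver gave up waiting for the VMs to leave the "Running" state, so the nodes were still up when the test moved on to the restart. Outside the harness, one way to see what libvirt thinks of the machines and to force a wedged node off is sketched below (the domain names are an assumption based on the profile/node names printed above, and virsh destroy is a hard power-off, not a graceful shutdown):

    # List all libvirt domains, including ones that should have stopped.
    sudo virsh list --all
    # Force off a node stuck in "running" so a retried stop/start can proceed.
    sudo virsh destroy multinode-695944-m03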
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695944 --wait=true -v=8 --alsologtostderr
E0319 20:04:30.843917   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 20:05:04.834749   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695944 --wait=true -v=8 --alsologtostderr: (3m5.125522574s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-695944
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-695944 -n multinode-695944
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-695944 logs -n 25: (1.674094368s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3232251347/001/cp-test_multinode-695944-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944:/home/docker/cp-test_multinode-695944-m02_multinode-695944.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944 sudo cat                                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m02_multinode-695944.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03:/home/docker/cp-test_multinode-695944-m02_multinode-695944-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944-m03 sudo cat                                   | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m02_multinode-695944-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp testdata/cp-test.txt                                                | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3232251347/001/cp-test_multinode-695944-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944:/home/docker/cp-test_multinode-695944-m03_multinode-695944.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944 sudo cat                                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m03_multinode-695944.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02:/home/docker/cp-test_multinode-695944-m03_multinode-695944-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944-m02 sudo cat                                   | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m03_multinode-695944-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-695944 node stop m03                                                          | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	| node    | multinode-695944 node start                                                             | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-695944                                                                | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:01 UTC |                     |
	| stop    | -p multinode-695944                                                                     | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:01 UTC |                     |
	| start   | -p multinode-695944                                                                     | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:03 UTC | 19 Mar 24 20:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-695944                                                                | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:03:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:03:25.886561   43680 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:03:25.886695   43680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:03:25.886706   43680 out.go:304] Setting ErrFile to fd 2...
	I0319 20:03:25.886712   43680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:03:25.886910   43680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:03:25.887434   43680 out.go:298] Setting JSON to false
	I0319 20:03:25.888293   43680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6304,"bootTime":1710872302,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:03:25.888352   43680 start.go:139] virtualization: kvm guest
	I0319 20:03:25.891252   43680 out.go:177] * [multinode-695944] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:03:25.892797   43680 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:03:25.894190   43680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:03:25.892803   43680 notify.go:220] Checking for updates...
	I0319 20:03:25.896688   43680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:03:25.898039   43680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:03:25.899376   43680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:03:25.900681   43680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:03:25.902226   43680 config.go:182] Loaded profile config "multinode-695944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:03:25.902306   43680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:03:25.902770   43680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:03:25.902831   43680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:03:25.917444   43680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0319 20:03:25.917826   43680 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:03:25.918341   43680 main.go:141] libmachine: Using API Version  1
	I0319 20:03:25.918377   43680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:03:25.918738   43680 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:03:25.918920   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:03:25.952640   43680 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:03:25.953926   43680 start.go:297] selected driver: kvm2
	I0319 20:03:25.953936   43680 start.go:901] validating driver "kvm2" against &{Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:03:25.954087   43680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:03:25.954403   43680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:03:25.954474   43680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:03:25.968121   43680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:03:25.968852   43680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:03:25.968915   43680 cni.go:84] Creating CNI manager for ""
	I0319 20:03:25.968927   43680 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 20:03:25.968977   43680 start.go:340] cluster config:
	{Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:03:25.969087   43680 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:03:25.971494   43680 out.go:177] * Starting "multinode-695944" primary control-plane node in "multinode-695944" cluster
	I0319 20:03:25.972600   43680 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:03:25.972625   43680 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:03:25.972634   43680 cache.go:56] Caching tarball of preloaded images
	I0319 20:03:25.972714   43680 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:03:25.972728   43680 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:03:25.972839   43680 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/config.json ...
	I0319 20:03:25.973023   43680 start.go:360] acquireMachinesLock for multinode-695944: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:03:25.973062   43680 start.go:364] duration metric: took 21.995µs to acquireMachinesLock for "multinode-695944"
	I0319 20:03:25.973088   43680 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:03:25.973096   43680 fix.go:54] fixHost starting: 
	I0319 20:03:25.973336   43680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:03:25.973367   43680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:03:25.986666   43680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0319 20:03:25.987106   43680 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:03:25.987623   43680 main.go:141] libmachine: Using API Version  1
	I0319 20:03:25.987645   43680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:03:25.987938   43680 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:03:25.988100   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:03:25.988228   43680 main.go:141] libmachine: (multinode-695944) Calling .GetState
	I0319 20:03:25.989757   43680 fix.go:112] recreateIfNeeded on multinode-695944: state=Running err=<nil>
	W0319 20:03:25.989772   43680 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:03:25.991584   43680 out.go:177] * Updating the running kvm2 "multinode-695944" VM ...
	I0319 20:03:25.992737   43680 machine.go:94] provisionDockerMachine start ...
	I0319 20:03:25.992754   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:03:25.992935   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:25.995446   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:25.995895   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:25.995924   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:25.996065   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:25.996232   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:25.996433   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:25.996588   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:25.996743   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:25.996962   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:25.996979   43680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:03:26.102360   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-695944
	
	I0319 20:03:26.102382   43680 main.go:141] libmachine: (multinode-695944) Calling .GetMachineName
	I0319 20:03:26.102610   43680 buildroot.go:166] provisioning hostname "multinode-695944"
	I0319 20:03:26.102630   43680 main.go:141] libmachine: (multinode-695944) Calling .GetMachineName
	I0319 20:03:26.102815   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.105344   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.105646   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.105682   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.105834   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.106028   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.106173   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.106342   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.106494   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:26.106664   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:26.106688   43680 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-695944 && echo "multinode-695944" | sudo tee /etc/hostname
	I0319 20:03:26.232012   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-695944
	
	I0319 20:03:26.232039   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.234935   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.235315   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.235345   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.235489   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.235691   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.235840   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.235983   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.236131   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:26.236318   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:26.236336   43680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-695944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-695944/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-695944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:03:26.342162   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:03:26.342187   43680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:03:26.342229   43680 buildroot.go:174] setting up certificates
	I0319 20:03:26.342240   43680 provision.go:84] configureAuth start
	I0319 20:03:26.342252   43680 main.go:141] libmachine: (multinode-695944) Calling .GetMachineName
	I0319 20:03:26.342507   43680 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:03:26.345321   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.345726   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.345764   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.345948   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.348177   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.348503   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.348551   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.348644   43680 provision.go:143] copyHostCerts
	I0319 20:03:26.348676   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:03:26.348713   43680 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:03:26.348724   43680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:03:26.348807   43680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:03:26.348948   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:03:26.348978   43680 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:03:26.348988   43680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:03:26.349036   43680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:03:26.349121   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:03:26.349147   43680 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:03:26.349157   43680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:03:26.349193   43680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:03:26.349276   43680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.multinode-695944 san=[127.0.0.1 192.168.39.64 localhost minikube multinode-695944]
	I0319 20:03:26.420765   43680 provision.go:177] copyRemoteCerts
	I0319 20:03:26.420820   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:03:26.420841   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.423425   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.423797   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.423838   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.424024   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.424197   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.424363   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.424489   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:03:26.507309   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 20:03:26.507386   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:03:26.538596   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 20:03:26.538663   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0319 20:03:26.573064   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 20:03:26.573133   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:03:26.607891   43680 provision.go:87] duration metric: took 265.639005ms to configureAuth
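The copyRemoteCerts step above pushes the CA and server certificates into /etc/docker on the guest. A minimal way to spot-check the result by hand, reusing the SSH key, user and IP that appear in this log, would be:

    # Hedged sketch: verify the certificates copied above landed where the scp lines say they did
    ssh -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa \
        docker@192.168.39.64 \
        'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'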
	I0319 20:03:26.607915   43680 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:03:26.608111   43680 config.go:182] Loaded profile config "multinode-695944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:03:26.608179   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.610532   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.610945   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.610982   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.611180   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.611369   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.611525   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.611671   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.611828   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:26.611999   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:26.612013   43680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:04:57.508070   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:04:57.508101   43680 machine.go:97] duration metric: took 1m31.51535123s to provisionDockerMachine
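Nearly all of the 1m31s reported for provisionDockerMachine is spent inside the single SSH command above, which writes /etc/sysconfig/crio.minikube and then runs "sudo systemctl restart crio" (issued at 20:03:26, returning at 20:04:57). A hedged sketch for inspecting that drop-in on the guest follows; how crio.service actually consumes CRIO_MINIKUBE_OPTIONS is not shown in this log, so the second command is only one way to look for that wiring:

    # File path and expected contents are taken from the log output above
    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -n MINIKUBE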
	I0319 20:04:57.508115   43680 start.go:293] postStartSetup for "multinode-695944" (driver="kvm2")
	I0319 20:04:57.508126   43680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:04:57.508141   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.508509   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:04:57.508542   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.511636   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.512040   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.512068   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.512235   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.512446   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.512602   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.512730   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:04:57.597580   43680 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:04:57.602234   43680 command_runner.go:130] > NAME=Buildroot
	I0319 20:04:57.602259   43680 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0319 20:04:57.602265   43680 command_runner.go:130] > ID=buildroot
	I0319 20:04:57.602273   43680 command_runner.go:130] > VERSION_ID=2023.02.9
	I0319 20:04:57.602280   43680 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0319 20:04:57.602323   43680 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:04:57.602339   43680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:04:57.602406   43680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:04:57.602486   43680 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:04:57.602495   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 20:04:57.602572   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:04:57.613637   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:04:57.640528   43680 start.go:296] duration metric: took 132.401619ms for postStartSetup
	I0319 20:04:57.640566   43680 fix.go:56] duration metric: took 1m31.667468548s for fixHost
	I0319 20:04:57.640586   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.642936   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.643285   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.643305   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.643472   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.643679   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.643854   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.644026   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.644180   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:04:57.644429   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:04:57.644441   43680 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:04:57.745664   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710878697.731461853
	
	I0319 20:04:57.745682   43680 fix.go:216] guest clock: 1710878697.731461853
	I0319 20:04:57.745691   43680 fix.go:229] Guest: 2024-03-19 20:04:57.731461853 +0000 UTC Remote: 2024-03-19 20:04:57.640571222 +0000 UTC m=+91.803000366 (delta=90.890631ms)
	I0319 20:04:57.745713   43680 fix.go:200] guest clock delta is within tolerance: 90.890631ms
	I0319 20:04:57.745720   43680 start.go:83] releasing machines lock for "multinode-695944", held for 1m31.772646971s
	I0319 20:04:57.745743   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.746027   43680 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:04:57.748331   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.748745   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.748763   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.748956   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.749589   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.749791   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.749890   43680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:04:57.749949   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.750022   43680 ssh_runner.go:195] Run: cat /version.json
	I0319 20:04:57.750050   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.752558   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.752828   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.752910   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.752937   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.753069   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.753186   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.753209   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.753211   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.753332   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.753411   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.753482   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.753536   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:04:57.753624   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.753754   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:04:57.829987   43680 command_runner.go:130] > {"iso_version": "v1.32.1-1710573846-18277", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "c68f4945cc664fefa1b332c623244b57043707c8"}
	I0319 20:04:57.830331   43680 ssh_runner.go:195] Run: systemctl --version
	I0319 20:04:57.858563   43680 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0319 20:04:57.859285   43680 command_runner.go:130] > systemd 252 (252)
	I0319 20:04:57.859323   43680 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0319 20:04:57.859392   43680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:04:58.025916   43680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0319 20:04:58.032768   43680 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0319 20:04:58.032900   43680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:04:58.032973   43680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:04:58.043001   43680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 20:04:58.043018   43680 start.go:494] detecting cgroup driver to use...
	I0319 20:04:58.043102   43680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:04:58.061710   43680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:04:58.076639   43680 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:04:58.076689   43680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:04:58.091656   43680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:04:58.105954   43680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:04:58.254755   43680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:04:58.401134   43680 docker.go:233] disabling docker service ...
	I0319 20:04:58.401204   43680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:04:58.419327   43680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:04:58.434180   43680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:04:58.600032   43680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:04:58.747776   43680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:04:58.764054   43680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:04:58.786440   43680 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0319 20:04:58.787047   43680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:04:58.787109   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.798527   43680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:04:58.798580   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.809450   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.820430   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.831208   43680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:04:58.842670   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.853874   43680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.866627   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.878222   43680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:04:58.888969   43680 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0319 20:04:58.889025   43680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:04:58.899782   43680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:04:59.041999   43680 ssh_runner.go:195] Run: sudo systemctl restart crio
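Taken together, the sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports before crio is restarted. A hedged way to confirm the resulting drop-in, with the expected values taken from those commands (the surrounding file layout is not shown in this log):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",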
	I0319 20:04:59.307769   43680 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:04:59.307846   43680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:04:59.314127   43680 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0319 20:04:59.314148   43680 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0319 20:04:59.314154   43680 command_runner.go:130] > Device: 0,22	Inode: 1314        Links: 1
	I0319 20:04:59.314161   43680 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0319 20:04:59.314166   43680 command_runner.go:130] > Access: 2024-03-19 20:04:59.175273304 +0000
	I0319 20:04:59.314178   43680 command_runner.go:130] > Modify: 2024-03-19 20:04:59.175273304 +0000
	I0319 20:04:59.314183   43680 command_runner.go:130] > Change: 2024-03-19 20:04:59.175273304 +0000
	I0319 20:04:59.314187   43680 command_runner.go:130] >  Birth: -
	I0319 20:04:59.314423   43680 start.go:562] Will wait 60s for crictl version
	I0319 20:04:59.314491   43680 ssh_runner.go:195] Run: which crictl
	I0319 20:04:59.318799   43680 command_runner.go:130] > /usr/bin/crictl
	I0319 20:04:59.319024   43680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:04:59.362144   43680 command_runner.go:130] > Version:  0.1.0
	I0319 20:04:59.362243   43680 command_runner.go:130] > RuntimeName:  cri-o
	I0319 20:04:59.362329   43680 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0319 20:04:59.362555   43680 command_runner.go:130] > RuntimeApiVersion:  v1
	I0319 20:04:59.363955   43680 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:04:59.364023   43680 ssh_runner.go:195] Run: crio --version
	I0319 20:04:59.395902   43680 command_runner.go:130] > crio version 1.29.1
	I0319 20:04:59.395940   43680 command_runner.go:130] > Version:        1.29.1
	I0319 20:04:59.395946   43680 command_runner.go:130] > GitCommit:      unknown
	I0319 20:04:59.395950   43680 command_runner.go:130] > GitCommitDate:  unknown
	I0319 20:04:59.395954   43680 command_runner.go:130] > GitTreeState:   clean
	I0319 20:04:59.395964   43680 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0319 20:04:59.395967   43680 command_runner.go:130] > GoVersion:      go1.21.6
	I0319 20:04:59.395971   43680 command_runner.go:130] > Compiler:       gc
	I0319 20:04:59.395976   43680 command_runner.go:130] > Platform:       linux/amd64
	I0319 20:04:59.395980   43680 command_runner.go:130] > Linkmode:       dynamic
	I0319 20:04:59.395984   43680 command_runner.go:130] > BuildTags:      
	I0319 20:04:59.395989   43680 command_runner.go:130] >   containers_image_ostree_stub
	I0319 20:04:59.395993   43680 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0319 20:04:59.395997   43680 command_runner.go:130] >   btrfs_noversion
	I0319 20:04:59.396004   43680 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0319 20:04:59.396008   43680 command_runner.go:130] >   libdm_no_deferred_remove
	I0319 20:04:59.396015   43680 command_runner.go:130] >   seccomp
	I0319 20:04:59.396019   43680 command_runner.go:130] > LDFlags:          unknown
	I0319 20:04:59.396022   43680 command_runner.go:130] > SeccompEnabled:   true
	I0319 20:04:59.396026   43680 command_runner.go:130] > AppArmorEnabled:  false
	I0319 20:04:59.397180   43680 ssh_runner.go:195] Run: crio --version
	I0319 20:04:59.431558   43680 command_runner.go:130] > crio version 1.29.1
	I0319 20:04:59.431583   43680 command_runner.go:130] > Version:        1.29.1
	I0319 20:04:59.431588   43680 command_runner.go:130] > GitCommit:      unknown
	I0319 20:04:59.431593   43680 command_runner.go:130] > GitCommitDate:  unknown
	I0319 20:04:59.431596   43680 command_runner.go:130] > GitTreeState:   clean
	I0319 20:04:59.431602   43680 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0319 20:04:59.431606   43680 command_runner.go:130] > GoVersion:      go1.21.6
	I0319 20:04:59.431609   43680 command_runner.go:130] > Compiler:       gc
	I0319 20:04:59.431614   43680 command_runner.go:130] > Platform:       linux/amd64
	I0319 20:04:59.431618   43680 command_runner.go:130] > Linkmode:       dynamic
	I0319 20:04:59.431622   43680 command_runner.go:130] > BuildTags:      
	I0319 20:04:59.431627   43680 command_runner.go:130] >   containers_image_ostree_stub
	I0319 20:04:59.431631   43680 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0319 20:04:59.431634   43680 command_runner.go:130] >   btrfs_noversion
	I0319 20:04:59.431638   43680 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0319 20:04:59.431642   43680 command_runner.go:130] >   libdm_no_deferred_remove
	I0319 20:04:59.431645   43680 command_runner.go:130] >   seccomp
	I0319 20:04:59.431667   43680 command_runner.go:130] > LDFlags:          unknown
	I0319 20:04:59.431672   43680 command_runner.go:130] > SeccompEnabled:   true
	I0319 20:04:59.431676   43680 command_runner.go:130] > AppArmorEnabled:  false
	I0319 20:04:59.435589   43680 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:04:59.436851   43680 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:04:59.439250   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:59.439572   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:59.439599   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:59.439750   43680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:04:59.444744   43680 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0319 20:04:59.444830   43680 kubeadm.go:877] updating cluster {Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:04:59.445010   43680 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:04:59.445066   43680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:04:59.498969   43680 command_runner.go:130] > {
	I0319 20:04:59.498995   43680 command_runner.go:130] >   "images": [
	I0319 20:04:59.499002   43680 command_runner.go:130] >     {
	I0319 20:04:59.499014   43680 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0319 20:04:59.499020   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499026   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0319 20:04:59.499030   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499034   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499043   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0319 20:04:59.499050   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0319 20:04:59.499056   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499060   43680 command_runner.go:130] >       "size": "65291810",
	I0319 20:04:59.499064   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499070   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499088   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499099   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499106   43680 command_runner.go:130] >     },
	I0319 20:04:59.499114   43680 command_runner.go:130] >     {
	I0319 20:04:59.499121   43680 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0319 20:04:59.499127   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499132   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0319 20:04:59.499138   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499142   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499149   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0319 20:04:59.499174   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0319 20:04:59.499183   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499191   43680 command_runner.go:130] >       "size": "1363676",
	I0319 20:04:59.499203   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499217   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499225   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499229   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499235   43680 command_runner.go:130] >     },
	I0319 20:04:59.499239   43680 command_runner.go:130] >     {
	I0319 20:04:59.499248   43680 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0319 20:04:59.499258   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499271   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0319 20:04:59.499279   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499289   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499304   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0319 20:04:59.499318   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0319 20:04:59.499324   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499330   43680 command_runner.go:130] >       "size": "31470524",
	I0319 20:04:59.499336   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499346   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499356   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499371   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499379   43680 command_runner.go:130] >     },
	I0319 20:04:59.499385   43680 command_runner.go:130] >     {
	I0319 20:04:59.499398   43680 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0319 20:04:59.499406   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499412   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0319 20:04:59.499420   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499427   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499444   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0319 20:04:59.499466   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0319 20:04:59.499475   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499485   43680 command_runner.go:130] >       "size": "61245718",
	I0319 20:04:59.499493   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499498   43680 command_runner.go:130] >       "username": "nonroot",
	I0319 20:04:59.499505   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499523   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499532   43680 command_runner.go:130] >     },
	I0319 20:04:59.499538   43680 command_runner.go:130] >     {
	I0319 20:04:59.499551   43680 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0319 20:04:59.499561   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499571   43680 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0319 20:04:59.499579   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499583   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499593   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0319 20:04:59.499608   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0319 20:04:59.499616   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499627   43680 command_runner.go:130] >       "size": "150779692",
	I0319 20:04:59.499637   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.499646   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.499655   43680 command_runner.go:130] >       },
	I0319 20:04:59.499662   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499669   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499683   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499693   43680 command_runner.go:130] >     },
	I0319 20:04:59.499702   43680 command_runner.go:130] >     {
	I0319 20:04:59.499714   43680 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0319 20:04:59.499724   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499735   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0319 20:04:59.499744   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499752   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499763   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0319 20:04:59.499779   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0319 20:04:59.499789   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499798   43680 command_runner.go:130] >       "size": "128508878",
	I0319 20:04:59.499807   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.499817   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.499826   43680 command_runner.go:130] >       },
	I0319 20:04:59.499835   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499840   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499847   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499852   43680 command_runner.go:130] >     },
	I0319 20:04:59.499866   43680 command_runner.go:130] >     {
	I0319 20:04:59.499880   43680 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0319 20:04:59.499889   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499901   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0319 20:04:59.499910   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499917   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499933   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0319 20:04:59.499949   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0319 20:04:59.499958   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499968   43680 command_runner.go:130] >       "size": "123142962",
	I0319 20:04:59.499977   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.499987   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.499996   43680 command_runner.go:130] >       },
	I0319 20:04:59.500003   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500012   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500019   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.500023   43680 command_runner.go:130] >     },
	I0319 20:04:59.500031   43680 command_runner.go:130] >     {
	I0319 20:04:59.500043   43680 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0319 20:04:59.500052   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.500064   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0319 20:04:59.500072   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500082   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.500104   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0319 20:04:59.500119   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0319 20:04:59.500128   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500136   43680 command_runner.go:130] >       "size": "83634073",
	I0319 20:04:59.500145   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.500154   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500161   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500168   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.500173   43680 command_runner.go:130] >     },
	I0319 20:04:59.500178   43680 command_runner.go:130] >     {
	I0319 20:04:59.500187   43680 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0319 20:04:59.500191   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.500196   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0319 20:04:59.500207   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500218   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.500233   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0319 20:04:59.500247   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0319 20:04:59.500255   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500274   43680 command_runner.go:130] >       "size": "60724018",
	I0319 20:04:59.500283   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.500293   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.500301   43680 command_runner.go:130] >       },
	I0319 20:04:59.500307   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500316   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500326   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.500331   43680 command_runner.go:130] >     },
	I0319 20:04:59.500337   43680 command_runner.go:130] >     {
	I0319 20:04:59.500346   43680 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0319 20:04:59.500355   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.500371   43680 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0319 20:04:59.500380   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500386   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.500401   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0319 20:04:59.500414   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0319 20:04:59.500422   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500431   43680 command_runner.go:130] >       "size": "750414",
	I0319 20:04:59.500441   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.500451   43680 command_runner.go:130] >         "value": "65535"
	I0319 20:04:59.500460   43680 command_runner.go:130] >       },
	I0319 20:04:59.500469   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500476   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500485   43680 command_runner.go:130] >       "pinned": true
	I0319 20:04:59.500493   43680 command_runner.go:130] >     }
	I0319 20:04:59.500499   43680 command_runner.go:130] >   ]
	I0319 20:04:59.500506   43680 command_runner.go:130] > }
	I0319 20:04:59.500728   43680 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:04:59.500743   43680 crio.go:433] Images already preloaded, skipping extraction
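The preload check above concludes from the crictl JSON that every image required for Kubernetes v1.29.3 on CRI-O is already present. A hedged one-liner that reproduces the listing the check consumes (jq usage is an assumption here; minikube itself parses this JSON in Go):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
    # Per the JSON above, the output includes registry.k8s.io/kube-apiserver:v1.29.3,
    # registry.k8s.io/etcd:3.5.12-0, registry.k8s.io/coredns/coredns:v1.11.1 and registry.k8s.io/pause:3.9.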
	I0319 20:04:59.500789   43680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:04:59.548727   43680 command_runner.go:130] > {
	I0319 20:04:59.548748   43680 command_runner.go:130] >   "images": [
	I0319 20:04:59.548754   43680 command_runner.go:130] >     {
	I0319 20:04:59.548769   43680 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0319 20:04:59.548775   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.548784   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0319 20:04:59.548791   43680 command_runner.go:130] >       ],
	I0319 20:04:59.548798   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.548816   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0319 20:04:59.548832   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0319 20:04:59.548839   43680 command_runner.go:130] >       ],
	I0319 20:04:59.548848   43680 command_runner.go:130] >       "size": "65291810",
	I0319 20:04:59.548855   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.548863   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.548886   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.548896   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.548902   43680 command_runner.go:130] >     },
	I0319 20:04:59.548909   43680 command_runner.go:130] >     {
	I0319 20:04:59.548919   43680 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0319 20:04:59.548935   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.548947   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0319 20:04:59.548953   43680 command_runner.go:130] >       ],
	I0319 20:04:59.548961   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.548974   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0319 20:04:59.548989   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0319 20:04:59.548996   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549005   43680 command_runner.go:130] >       "size": "1363676",
	I0319 20:04:59.549011   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.549024   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549043   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549054   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549061   43680 command_runner.go:130] >     },
	I0319 20:04:59.549067   43680 command_runner.go:130] >     {
	I0319 20:04:59.549077   43680 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0319 20:04:59.549087   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549096   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0319 20:04:59.549105   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549113   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549129   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0319 20:04:59.549146   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0319 20:04:59.549154   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549162   43680 command_runner.go:130] >       "size": "31470524",
	I0319 20:04:59.549172   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.549181   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549190   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549197   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549206   43680 command_runner.go:130] >     },
	I0319 20:04:59.549212   43680 command_runner.go:130] >     {
	I0319 20:04:59.549226   43680 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0319 20:04:59.549235   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549245   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0319 20:04:59.549253   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549261   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549276   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0319 20:04:59.549302   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0319 20:04:59.549312   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549319   43680 command_runner.go:130] >       "size": "61245718",
	I0319 20:04:59.549326   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.549337   43680 command_runner.go:130] >       "username": "nonroot",
	I0319 20:04:59.549350   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549361   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549368   43680 command_runner.go:130] >     },
	I0319 20:04:59.549376   43680 command_runner.go:130] >     {
	I0319 20:04:59.549385   43680 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0319 20:04:59.549392   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549410   43680 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0319 20:04:59.549419   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549427   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549442   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0319 20:04:59.549458   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0319 20:04:59.549467   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549475   43680 command_runner.go:130] >       "size": "150779692",
	I0319 20:04:59.549484   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.549490   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.549499   43680 command_runner.go:130] >       },
	I0319 20:04:59.549505   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549513   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549523   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549531   43680 command_runner.go:130] >     },
	I0319 20:04:59.549539   43680 command_runner.go:130] >     {
	I0319 20:04:59.549551   43680 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0319 20:04:59.549560   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549569   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0319 20:04:59.549577   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549585   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549601   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0319 20:04:59.549617   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0319 20:04:59.549626   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549633   43680 command_runner.go:130] >       "size": "128508878",
	I0319 20:04:59.549641   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.549648   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.549657   43680 command_runner.go:130] >       },
	I0319 20:04:59.549664   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549675   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549685   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549691   43680 command_runner.go:130] >     },
	I0319 20:04:59.549699   43680 command_runner.go:130] >     {
	I0319 20:04:59.549710   43680 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0319 20:04:59.549723   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549735   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0319 20:04:59.549742   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549758   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549775   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0319 20:04:59.549790   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0319 20:04:59.549802   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549812   43680 command_runner.go:130] >       "size": "123142962",
	I0319 20:04:59.549819   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.549829   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.549838   43680 command_runner.go:130] >       },
	I0319 20:04:59.549845   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549855   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549863   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549870   43680 command_runner.go:130] >     },
	I0319 20:04:59.549883   43680 command_runner.go:130] >     {
	I0319 20:04:59.549896   43680 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0319 20:04:59.549903   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549912   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0319 20:04:59.549922   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549929   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549960   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0319 20:04:59.549976   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0319 20:04:59.549985   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549992   43680 command_runner.go:130] >       "size": "83634073",
	I0319 20:04:59.550005   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.550015   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.550022   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.550031   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.550038   43680 command_runner.go:130] >     },
	I0319 20:04:59.550047   43680 command_runner.go:130] >     {
	I0319 20:04:59.550058   43680 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0319 20:04:59.550068   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.550077   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0319 20:04:59.550085   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550092   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.550108   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0319 20:04:59.550124   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0319 20:04:59.550132   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550146   43680 command_runner.go:130] >       "size": "60724018",
	I0319 20:04:59.550156   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.550163   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.550171   43680 command_runner.go:130] >       },
	I0319 20:04:59.550178   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.550187   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.550196   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.550203   43680 command_runner.go:130] >     },
	I0319 20:04:59.550211   43680 command_runner.go:130] >     {
	I0319 20:04:59.550223   43680 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0319 20:04:59.550232   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.550240   43680 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0319 20:04:59.550248   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550255   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.550270   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0319 20:04:59.550288   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0319 20:04:59.550297   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550305   43680 command_runner.go:130] >       "size": "750414",
	I0319 20:04:59.550315   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.550323   43680 command_runner.go:130] >         "value": "65535"
	I0319 20:04:59.550331   43680 command_runner.go:130] >       },
	I0319 20:04:59.550338   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.550348   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.550357   43680 command_runner.go:130] >       "pinned": true
	I0319 20:04:59.550366   43680 command_runner.go:130] >     }
	I0319 20:04:59.550373   43680 command_runner.go:130] >   ]
	I0319 20:04:59.550378   43680 command_runner.go:130] > }
	I0319 20:04:59.550495   43680 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:04:59.550507   43680 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:04:59.550516   43680 kubeadm.go:928] updating node { 192.168.39.64 8443 v1.29.3 crio true true} ...
	I0319 20:04:59.550648   43680 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-695944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:04:59.550722   43680 ssh_runner.go:195] Run: crio config
	I0319 20:04:59.585482   43680 command_runner.go:130] ! time="2024-03-19 20:04:59.571408956Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0319 20:04:59.592615   43680 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0319 20:04:59.602630   43680 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0319 20:04:59.602645   43680 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0319 20:04:59.602652   43680 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0319 20:04:59.602655   43680 command_runner.go:130] > #
	I0319 20:04:59.602662   43680 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0319 20:04:59.602668   43680 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0319 20:04:59.602673   43680 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0319 20:04:59.602682   43680 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0319 20:04:59.602685   43680 command_runner.go:130] > # reload'.
	I0319 20:04:59.602692   43680 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0319 20:04:59.602701   43680 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0319 20:04:59.602707   43680 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0319 20:04:59.602713   43680 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0319 20:04:59.602725   43680 command_runner.go:130] > [crio]
	I0319 20:04:59.602732   43680 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0319 20:04:59.602736   43680 command_runner.go:130] > # containers images, in this directory.
	I0319 20:04:59.602743   43680 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0319 20:04:59.602754   43680 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0319 20:04:59.602766   43680 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0319 20:04:59.602773   43680 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0319 20:04:59.602777   43680 command_runner.go:130] > # imagestore = ""
	I0319 20:04:59.602785   43680 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0319 20:04:59.602795   43680 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0319 20:04:59.602801   43680 command_runner.go:130] > storage_driver = "overlay"
	I0319 20:04:59.602810   43680 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0319 20:04:59.602819   43680 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0319 20:04:59.602839   43680 command_runner.go:130] > storage_option = [
	I0319 20:04:59.602844   43680 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0319 20:04:59.602847   43680 command_runner.go:130] > ]
	I0319 20:04:59.602854   43680 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0319 20:04:59.602863   43680 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0319 20:04:59.602876   43680 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0319 20:04:59.602888   43680 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0319 20:04:59.602899   43680 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0319 20:04:59.602909   43680 command_runner.go:130] > # always happen on a node reboot
	I0319 20:04:59.602917   43680 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0319 20:04:59.602941   43680 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0319 20:04:59.602952   43680 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0319 20:04:59.602960   43680 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0319 20:04:59.602965   43680 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0319 20:04:59.602974   43680 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0319 20:04:59.602985   43680 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0319 20:04:59.602989   43680 command_runner.go:130] > # internal_wipe = true
	I0319 20:04:59.603001   43680 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0319 20:04:59.603015   43680 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0319 20:04:59.603023   43680 command_runner.go:130] > # internal_repair = false
	I0319 20:04:59.603035   43680 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0319 20:04:59.603047   43680 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0319 20:04:59.603058   43680 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0319 20:04:59.603070   43680 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0319 20:04:59.603082   43680 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0319 20:04:59.603088   43680 command_runner.go:130] > [crio.api]
	I0319 20:04:59.603093   43680 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0319 20:04:59.603102   43680 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0319 20:04:59.603114   43680 command_runner.go:130] > # IP address on which the stream server will listen.
	I0319 20:04:59.603125   43680 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0319 20:04:59.603138   43680 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0319 20:04:59.603149   43680 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0319 20:04:59.603159   43680 command_runner.go:130] > # stream_port = "0"
	I0319 20:04:59.603167   43680 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0319 20:04:59.603175   43680 command_runner.go:130] > # stream_enable_tls = false
	I0319 20:04:59.603181   43680 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0319 20:04:59.603197   43680 command_runner.go:130] > # stream_idle_timeout = ""
	I0319 20:04:59.603221   43680 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0319 20:04:59.603235   43680 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0319 20:04:59.603244   43680 command_runner.go:130] > # minutes.
	I0319 20:04:59.603251   43680 command_runner.go:130] > # stream_tls_cert = ""
	I0319 20:04:59.603263   43680 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0319 20:04:59.603275   43680 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0319 20:04:59.603283   43680 command_runner.go:130] > # stream_tls_key = ""
	I0319 20:04:59.603289   43680 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0319 20:04:59.603302   43680 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0319 20:04:59.603333   43680 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0319 20:04:59.603343   43680 command_runner.go:130] > # stream_tls_ca = ""
	I0319 20:04:59.603355   43680 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0319 20:04:59.603364   43680 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0319 20:04:59.603377   43680 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0319 20:04:59.603384   43680 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0319 20:04:59.603391   43680 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0319 20:04:59.603402   43680 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0319 20:04:59.603411   43680 command_runner.go:130] > [crio.runtime]
	I0319 20:04:59.603426   43680 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0319 20:04:59.603438   43680 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0319 20:04:59.603447   43680 command_runner.go:130] > # "nofile=1024:2048"
	I0319 20:04:59.603458   43680 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0319 20:04:59.603467   43680 command_runner.go:130] > # default_ulimits = [
	I0319 20:04:59.603472   43680 command_runner.go:130] > # ]
	I0319 20:04:59.603483   43680 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0319 20:04:59.603487   43680 command_runner.go:130] > # no_pivot = false
	I0319 20:04:59.603502   43680 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0319 20:04:59.603516   43680 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0319 20:04:59.603524   43680 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0319 20:04:59.603537   43680 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0319 20:04:59.603548   43680 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0319 20:04:59.603562   43680 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0319 20:04:59.603572   43680 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0319 20:04:59.603582   43680 command_runner.go:130] > # Cgroup setting for conmon
	I0319 20:04:59.603588   43680 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0319 20:04:59.603603   43680 command_runner.go:130] > conmon_cgroup = "pod"
	I0319 20:04:59.603617   43680 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0319 20:04:59.603626   43680 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0319 20:04:59.603639   43680 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0319 20:04:59.603648   43680 command_runner.go:130] > conmon_env = [
	I0319 20:04:59.603658   43680 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0319 20:04:59.603666   43680 command_runner.go:130] > ]
	I0319 20:04:59.603675   43680 command_runner.go:130] > # Additional environment variables to set for all the
	I0319 20:04:59.603684   43680 command_runner.go:130] > # containers. These are overridden if set in the
	I0319 20:04:59.603690   43680 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0319 20:04:59.603698   43680 command_runner.go:130] > # default_env = [
	I0319 20:04:59.603703   43680 command_runner.go:130] > # ]
	I0319 20:04:59.603717   43680 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0319 20:04:59.603729   43680 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0319 20:04:59.603738   43680 command_runner.go:130] > # selinux = false
	I0319 20:04:59.603748   43680 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0319 20:04:59.603761   43680 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0319 20:04:59.603772   43680 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0319 20:04:59.603781   43680 command_runner.go:130] > # seccomp_profile = ""
	I0319 20:04:59.603787   43680 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0319 20:04:59.603798   43680 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0319 20:04:59.603811   43680 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0319 20:04:59.603820   43680 command_runner.go:130] > # which might increase security.
	I0319 20:04:59.603834   43680 command_runner.go:130] > # This option is currently deprecated,
	I0319 20:04:59.603845   43680 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0319 20:04:59.603856   43680 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0319 20:04:59.603871   43680 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0319 20:04:59.603882   43680 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0319 20:04:59.603891   43680 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0319 20:04:59.603904   43680 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0319 20:04:59.603916   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.603924   43680 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0319 20:04:59.603937   43680 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0319 20:04:59.603947   43680 command_runner.go:130] > # the cgroup blockio controller.
	I0319 20:04:59.603955   43680 command_runner.go:130] > # blockio_config_file = ""
	I0319 20:04:59.603968   43680 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0319 20:04:59.603983   43680 command_runner.go:130] > # blockio parameters.
	I0319 20:04:59.603991   43680 command_runner.go:130] > # blockio_reload = false
	I0319 20:04:59.604001   43680 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0319 20:04:59.604011   43680 command_runner.go:130] > # irqbalance daemon.
	I0319 20:04:59.604020   43680 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0319 20:04:59.604034   43680 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0319 20:04:59.604048   43680 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0319 20:04:59.604061   43680 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0319 20:04:59.604073   43680 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0319 20:04:59.604083   43680 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0319 20:04:59.604088   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.604101   43680 command_runner.go:130] > # rdt_config_file = ""
	I0319 20:04:59.604113   43680 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0319 20:04:59.604121   43680 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0319 20:04:59.604160   43680 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0319 20:04:59.604171   43680 command_runner.go:130] > # separate_pull_cgroup = ""
	I0319 20:04:59.604181   43680 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0319 20:04:59.604191   43680 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0319 20:04:59.604198   43680 command_runner.go:130] > # will be added.
	I0319 20:04:59.604205   43680 command_runner.go:130] > # default_capabilities = [
	I0319 20:04:59.604214   43680 command_runner.go:130] > # 	"CHOWN",
	I0319 20:04:59.604220   43680 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0319 20:04:59.604226   43680 command_runner.go:130] > # 	"FSETID",
	I0319 20:04:59.604235   43680 command_runner.go:130] > # 	"FOWNER",
	I0319 20:04:59.604241   43680 command_runner.go:130] > # 	"SETGID",
	I0319 20:04:59.604250   43680 command_runner.go:130] > # 	"SETUID",
	I0319 20:04:59.604267   43680 command_runner.go:130] > # 	"SETPCAP",
	I0319 20:04:59.604275   43680 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0319 20:04:59.604281   43680 command_runner.go:130] > # 	"KILL",
	I0319 20:04:59.604287   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604299   43680 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0319 20:04:59.604313   43680 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0319 20:04:59.604326   43680 command_runner.go:130] > # add_inheritable_capabilities = false
	I0319 20:04:59.604337   43680 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0319 20:04:59.604348   43680 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0319 20:04:59.604357   43680 command_runner.go:130] > default_sysctls = [
	I0319 20:04:59.604378   43680 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0319 20:04:59.604387   43680 command_runner.go:130] > ]
	I0319 20:04:59.604395   43680 command_runner.go:130] > # List of devices on the host that a
	I0319 20:04:59.604408   43680 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0319 20:04:59.604417   43680 command_runner.go:130] > # allowed_devices = [
	I0319 20:04:59.604423   43680 command_runner.go:130] > # 	"/dev/fuse",
	I0319 20:04:59.604431   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604437   43680 command_runner.go:130] > # List of additional devices. specified as
	I0319 20:04:59.604450   43680 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0319 20:04:59.604462   43680 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0319 20:04:59.604473   43680 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0319 20:04:59.604482   43680 command_runner.go:130] > # additional_devices = [
	I0319 20:04:59.604487   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604497   43680 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0319 20:04:59.604506   43680 command_runner.go:130] > # cdi_spec_dirs = [
	I0319 20:04:59.604516   43680 command_runner.go:130] > # 	"/etc/cdi",
	I0319 20:04:59.604521   43680 command_runner.go:130] > # 	"/var/run/cdi",
	I0319 20:04:59.604526   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604534   43680 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0319 20:04:59.604548   43680 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0319 20:04:59.604558   43680 command_runner.go:130] > # Defaults to false.
	I0319 20:04:59.604566   43680 command_runner.go:130] > # device_ownership_from_security_context = false
	I0319 20:04:59.604579   43680 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0319 20:04:59.604591   43680 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0319 20:04:59.604601   43680 command_runner.go:130] > # hooks_dir = [
	I0319 20:04:59.604609   43680 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0319 20:04:59.604613   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604621   43680 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0319 20:04:59.604634   43680 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0319 20:04:59.604646   43680 command_runner.go:130] > # its default mounts from the following two files:
	I0319 20:04:59.604654   43680 command_runner.go:130] > #
	I0319 20:04:59.604664   43680 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0319 20:04:59.604677   43680 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0319 20:04:59.604688   43680 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0319 20:04:59.604695   43680 command_runner.go:130] > #
	I0319 20:04:59.604701   43680 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0319 20:04:59.604720   43680 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0319 20:04:59.604734   43680 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0319 20:04:59.604745   43680 command_runner.go:130] > #      only add mounts it finds in this file.
	I0319 20:04:59.604749   43680 command_runner.go:130] > #
	I0319 20:04:59.604755   43680 command_runner.go:130] > # default_mounts_file = ""
	I0319 20:04:59.604764   43680 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0319 20:04:59.604775   43680 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0319 20:04:59.604785   43680 command_runner.go:130] > pids_limit = 1024
	I0319 20:04:59.604795   43680 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0319 20:04:59.604808   43680 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0319 20:04:59.604821   43680 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0319 20:04:59.604834   43680 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0319 20:04:59.604840   43680 command_runner.go:130] > # log_size_max = -1
	I0319 20:04:59.604851   43680 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0319 20:04:59.604861   43680 command_runner.go:130] > # log_to_journald = false
	I0319 20:04:59.604877   43680 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0319 20:04:59.604888   43680 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0319 20:04:59.604900   43680 command_runner.go:130] > # Path to directory for container attach sockets.
	I0319 20:04:59.604910   43680 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0319 20:04:59.604922   43680 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0319 20:04:59.604930   43680 command_runner.go:130] > # bind_mount_prefix = ""
	I0319 20:04:59.604936   43680 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0319 20:04:59.604945   43680 command_runner.go:130] > # read_only = false
	I0319 20:04:59.604958   43680 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0319 20:04:59.604968   43680 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0319 20:04:59.604978   43680 command_runner.go:130] > # live configuration reload.
	I0319 20:04:59.604984   43680 command_runner.go:130] > # log_level = "info"
	I0319 20:04:59.604994   43680 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0319 20:04:59.605005   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.605014   43680 command_runner.go:130] > # log_filter = ""
	I0319 20:04:59.605023   43680 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0319 20:04:59.605042   43680 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0319 20:04:59.605051   43680 command_runner.go:130] > # separated by comma.
	I0319 20:04:59.605064   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605074   43680 command_runner.go:130] > # uid_mappings = ""
	I0319 20:04:59.605084   43680 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0319 20:04:59.605101   43680 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0319 20:04:59.605111   43680 command_runner.go:130] > # separated by comma.
	I0319 20:04:59.605122   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605132   43680 command_runner.go:130] > # gid_mappings = ""
	I0319 20:04:59.605142   43680 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0319 20:04:59.605156   43680 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0319 20:04:59.605166   43680 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0319 20:04:59.605181   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605190   43680 command_runner.go:130] > # minimum_mappable_uid = -1
	I0319 20:04:59.605200   43680 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0319 20:04:59.605216   43680 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0319 20:04:59.605226   43680 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0319 20:04:59.605236   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605246   43680 command_runner.go:130] > # minimum_mappable_gid = -1
	I0319 20:04:59.605261   43680 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0319 20:04:59.605273   43680 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0319 20:04:59.605285   43680 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0319 20:04:59.605295   43680 command_runner.go:130] > # ctr_stop_timeout = 30
	I0319 20:04:59.605306   43680 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0319 20:04:59.605315   43680 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0319 20:04:59.605321   43680 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0319 20:04:59.605333   43680 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0319 20:04:59.605341   43680 command_runner.go:130] > drop_infra_ctr = false
	I0319 20:04:59.605354   43680 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0319 20:04:59.605366   43680 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0319 20:04:59.605380   43680 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0319 20:04:59.605390   43680 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0319 20:04:59.605397   43680 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0319 20:04:59.605410   43680 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0319 20:04:59.605423   43680 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0319 20:04:59.605431   43680 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0319 20:04:59.605441   43680 command_runner.go:130] > # shared_cpuset = ""
	I0319 20:04:59.605450   43680 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0319 20:04:59.605461   43680 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0319 20:04:59.605471   43680 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0319 20:04:59.605485   43680 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0319 20:04:59.605497   43680 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0319 20:04:59.605507   43680 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0319 20:04:59.605533   43680 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0319 20:04:59.605544   43680 command_runner.go:130] > # enable_criu_support = false
	I0319 20:04:59.605555   43680 command_runner.go:130] > # Enable/disable the generation of the container,
	I0319 20:04:59.605567   43680 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0319 20:04:59.605575   43680 command_runner.go:130] > # enable_pod_events = false
	I0319 20:04:59.605586   43680 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0319 20:04:59.605592   43680 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0319 20:04:59.605603   43680 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0319 20:04:59.605614   43680 command_runner.go:130] > # default_runtime = "runc"
	I0319 20:04:59.605623   43680 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0319 20:04:59.605638   43680 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0319 20:04:59.605659   43680 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0319 20:04:59.605671   43680 command_runner.go:130] > # creation as a file is not desired either.
	I0319 20:04:59.605684   43680 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0319 20:04:59.605692   43680 command_runner.go:130] > # the hostname is being managed dynamically.
	I0319 20:04:59.605699   43680 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0319 20:04:59.605708   43680 command_runner.go:130] > # ]
	I0319 20:04:59.605718   43680 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0319 20:04:59.605731   43680 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0319 20:04:59.605741   43680 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0319 20:04:59.605753   43680 command_runner.go:130] > # Each entry in the table should follow the format:
	I0319 20:04:59.605758   43680 command_runner.go:130] > #
	I0319 20:04:59.605769   43680 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0319 20:04:59.605776   43680 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0319 20:04:59.605838   43680 command_runner.go:130] > # runtime_type = "oci"
	I0319 20:04:59.605851   43680 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0319 20:04:59.605859   43680 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0319 20:04:59.605865   43680 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0319 20:04:59.605893   43680 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0319 20:04:59.605903   43680 command_runner.go:130] > # monitor_env = []
	I0319 20:04:59.605911   43680 command_runner.go:130] > # privileged_without_host_devices = false
	I0319 20:04:59.605921   43680 command_runner.go:130] > # allowed_annotations = []
	I0319 20:04:59.605930   43680 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0319 20:04:59.605937   43680 command_runner.go:130] > # Where:
	I0319 20:04:59.605948   43680 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0319 20:04:59.605970   43680 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0319 20:04:59.605984   43680 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0319 20:04:59.605997   43680 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0319 20:04:59.606009   43680 command_runner.go:130] > #   in $PATH.
	I0319 20:04:59.606022   43680 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0319 20:04:59.606032   43680 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0319 20:04:59.606044   43680 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0319 20:04:59.606051   43680 command_runner.go:130] > #   state.
	I0319 20:04:59.606059   43680 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0319 20:04:59.606071   43680 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0319 20:04:59.606084   43680 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0319 20:04:59.606094   43680 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0319 20:04:59.606106   43680 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0319 20:04:59.606118   43680 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0319 20:04:59.606128   43680 command_runner.go:130] > #   The currently recognized values are:
	I0319 20:04:59.606138   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0319 20:04:59.606151   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0319 20:04:59.606160   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0319 20:04:59.606172   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0319 20:04:59.606188   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0319 20:04:59.606204   43680 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0319 20:04:59.606217   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0319 20:04:59.606231   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0319 20:04:59.606244   43680 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0319 20:04:59.606254   43680 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0319 20:04:59.606258   43680 command_runner.go:130] > #   deprecated option "conmon".
	I0319 20:04:59.606268   43680 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0319 20:04:59.606280   43680 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0319 20:04:59.606292   43680 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0319 20:04:59.606303   43680 command_runner.go:130] > #   should be moved to the container's cgroup
	I0319 20:04:59.606317   43680 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0319 20:04:59.606327   43680 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0319 20:04:59.606341   43680 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0319 20:04:59.606350   43680 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0319 20:04:59.606354   43680 command_runner.go:130] > #
	I0319 20:04:59.606366   43680 command_runner.go:130] > # Using the seccomp notifier feature:
	I0319 20:04:59.606376   43680 command_runner.go:130] > #
	I0319 20:04:59.606390   43680 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0319 20:04:59.606403   43680 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0319 20:04:59.606411   43680 command_runner.go:130] > #
	I0319 20:04:59.606420   43680 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0319 20:04:59.606432   43680 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0319 20:04:59.606439   43680 command_runner.go:130] > #
	I0319 20:04:59.606445   43680 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0319 20:04:59.606468   43680 command_runner.go:130] > # feature.
	I0319 20:04:59.606473   43680 command_runner.go:130] > #
	I0319 20:04:59.606487   43680 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0319 20:04:59.606500   43680 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0319 20:04:59.606512   43680 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0319 20:04:59.606524   43680 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0319 20:04:59.606534   43680 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0319 20:04:59.606540   43680 command_runner.go:130] > #
	I0319 20:04:59.606547   43680 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0319 20:04:59.606560   43680 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0319 20:04:59.606570   43680 command_runner.go:130] > #
	I0319 20:04:59.606580   43680 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0319 20:04:59.606591   43680 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0319 20:04:59.606599   43680 command_runner.go:130] > #
	I0319 20:04:59.606608   43680 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0319 20:04:59.606620   43680 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0319 20:04:59.606627   43680 command_runner.go:130] > # limitation.
	I0319 20:04:59.606633   43680 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0319 20:04:59.606643   43680 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0319 20:04:59.606650   43680 command_runner.go:130] > runtime_type = "oci"
	I0319 20:04:59.606660   43680 command_runner.go:130] > runtime_root = "/run/runc"
	I0319 20:04:59.606669   43680 command_runner.go:130] > runtime_config_path = ""
	I0319 20:04:59.606680   43680 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0319 20:04:59.606689   43680 command_runner.go:130] > monitor_cgroup = "pod"
	I0319 20:04:59.606696   43680 command_runner.go:130] > monitor_exec_cgroup = ""
	I0319 20:04:59.606705   43680 command_runner.go:130] > monitor_env = [
	I0319 20:04:59.606712   43680 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0319 20:04:59.606728   43680 command_runner.go:130] > ]
	I0319 20:04:59.606740   43680 command_runner.go:130] > privileged_without_host_devices = false
	I0319 20:04:59.606751   43680 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0319 20:04:59.606763   43680 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0319 20:04:59.606775   43680 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0319 20:04:59.606791   43680 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0319 20:04:59.606808   43680 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0319 20:04:59.606816   43680 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0319 20:04:59.606830   43680 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0319 20:04:59.606846   43680 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0319 20:04:59.606859   43680 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0319 20:04:59.606878   43680 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0319 20:04:59.606887   43680 command_runner.go:130] > # Example:
	I0319 20:04:59.606895   43680 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0319 20:04:59.606906   43680 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0319 20:04:59.606914   43680 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0319 20:04:59.606920   43680 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0319 20:04:59.606929   43680 command_runner.go:130] > # cpuset = 0
	I0319 20:04:59.606936   43680 command_runner.go:130] > # cpushares = "0-1"
	I0319 20:04:59.606945   43680 command_runner.go:130] > # Where:
	I0319 20:04:59.606953   43680 command_runner.go:130] > # The workload name is workload-type.
	I0319 20:04:59.606967   43680 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0319 20:04:59.606978   43680 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0319 20:04:59.606990   43680 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0319 20:04:59.607001   43680 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0319 20:04:59.607013   43680 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0319 20:04:59.607025   43680 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0319 20:04:59.607039   43680 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0319 20:04:59.607050   43680 command_runner.go:130] > # Default value is set to true
	I0319 20:04:59.607057   43680 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0319 20:04:59.607068   43680 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0319 20:04:59.607079   43680 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0319 20:04:59.607087   43680 command_runner.go:130] > # Default value is set to 'false'
	I0319 20:04:59.607095   43680 command_runner.go:130] > # disable_hostport_mapping = false
	I0319 20:04:59.607107   43680 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0319 20:04:59.607116   43680 command_runner.go:130] > #
	I0319 20:04:59.607131   43680 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0319 20:04:59.607142   43680 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0319 20:04:59.607152   43680 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0319 20:04:59.607162   43680 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0319 20:04:59.607176   43680 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0319 20:04:59.607184   43680 command_runner.go:130] > [crio.image]
	I0319 20:04:59.607191   43680 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0319 20:04:59.607201   43680 command_runner.go:130] > # default_transport = "docker://"
	I0319 20:04:59.607211   43680 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0319 20:04:59.607225   43680 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0319 20:04:59.607232   43680 command_runner.go:130] > # global_auth_file = ""
	I0319 20:04:59.607243   43680 command_runner.go:130] > # The image used to instantiate infra containers.
	I0319 20:04:59.607252   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.607263   43680 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0319 20:04:59.607275   43680 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0319 20:04:59.607286   43680 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0319 20:04:59.607294   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.607298   43680 command_runner.go:130] > # pause_image_auth_file = ""
	I0319 20:04:59.607310   43680 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0319 20:04:59.607324   43680 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0319 20:04:59.607337   43680 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0319 20:04:59.607350   43680 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0319 20:04:59.607360   43680 command_runner.go:130] > # pause_command = "/pause"
	I0319 20:04:59.607371   43680 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0319 20:04:59.607384   43680 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0319 20:04:59.607393   43680 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0319 20:04:59.607405   43680 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0319 20:04:59.607418   43680 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0319 20:04:59.607431   43680 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0319 20:04:59.607441   43680 command_runner.go:130] > # pinned_images = [
	I0319 20:04:59.607447   43680 command_runner.go:130] > # ]
	I0319 20:04:59.607459   43680 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0319 20:04:59.607471   43680 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0319 20:04:59.607483   43680 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0319 20:04:59.607491   43680 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0319 20:04:59.607500   43680 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0319 20:04:59.607519   43680 command_runner.go:130] > # signature_policy = ""
	I0319 20:04:59.607532   43680 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0319 20:04:59.607546   43680 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0319 20:04:59.607559   43680 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0319 20:04:59.607575   43680 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0319 20:04:59.607585   43680 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0319 20:04:59.607592   43680 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0319 20:04:59.607603   43680 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0319 20:04:59.607617   43680 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0319 20:04:59.607628   43680 command_runner.go:130] > # changing them here.
	I0319 20:04:59.607637   43680 command_runner.go:130] > # insecure_registries = [
	I0319 20:04:59.607646   43680 command_runner.go:130] > # ]
	I0319 20:04:59.607658   43680 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0319 20:04:59.607669   43680 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0319 20:04:59.607679   43680 command_runner.go:130] > # image_volumes = "mkdir"
	I0319 20:04:59.607687   43680 command_runner.go:130] > # Temporary directory to use for storing big files
	I0319 20:04:59.607693   43680 command_runner.go:130] > # big_files_temporary_dir = ""
	I0319 20:04:59.607707   43680 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0319 20:04:59.607717   43680 command_runner.go:130] > # CNI plugins.
	I0319 20:04:59.607723   43680 command_runner.go:130] > [crio.network]
	I0319 20:04:59.607736   43680 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0319 20:04:59.607746   43680 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0319 20:04:59.607756   43680 command_runner.go:130] > # cni_default_network = ""
	I0319 20:04:59.607768   43680 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0319 20:04:59.607778   43680 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0319 20:04:59.607786   43680 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0319 20:04:59.607793   43680 command_runner.go:130] > # plugin_dirs = [
	I0319 20:04:59.607799   43680 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0319 20:04:59.607808   43680 command_runner.go:130] > # ]
	I0319 20:04:59.607818   43680 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0319 20:04:59.607828   43680 command_runner.go:130] > [crio.metrics]
	I0319 20:04:59.607842   43680 command_runner.go:130] > # Globally enable or disable metrics support.
	I0319 20:04:59.607852   43680 command_runner.go:130] > enable_metrics = true
	I0319 20:04:59.607862   43680 command_runner.go:130] > # Specify enabled metrics collectors.
	I0319 20:04:59.607874   43680 command_runner.go:130] > # Per default all metrics are enabled.
	I0319 20:04:59.607886   43680 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0319 20:04:59.607905   43680 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0319 20:04:59.607919   43680 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0319 20:04:59.607928   43680 command_runner.go:130] > # metrics_collectors = [
	I0319 20:04:59.607936   43680 command_runner.go:130] > # 	"operations",
	I0319 20:04:59.607948   43680 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0319 20:04:59.607957   43680 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0319 20:04:59.607964   43680 command_runner.go:130] > # 	"operations_errors",
	I0319 20:04:59.607970   43680 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0319 20:04:59.607981   43680 command_runner.go:130] > # 	"image_pulls_by_name",
	I0319 20:04:59.607992   43680 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0319 20:04:59.608004   43680 command_runner.go:130] > # 	"image_pulls_failures",
	I0319 20:04:59.608014   43680 command_runner.go:130] > # 	"image_pulls_successes",
	I0319 20:04:59.608024   43680 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0319 20:04:59.608033   43680 command_runner.go:130] > # 	"image_layer_reuse",
	I0319 20:04:59.608043   43680 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0319 20:04:59.608052   43680 command_runner.go:130] > # 	"containers_oom_total",
	I0319 20:04:59.608060   43680 command_runner.go:130] > # 	"containers_oom",
	I0319 20:04:59.608064   43680 command_runner.go:130] > # 	"processes_defunct",
	I0319 20:04:59.608070   43680 command_runner.go:130] > # 	"operations_total",
	I0319 20:04:59.608080   43680 command_runner.go:130] > # 	"operations_latency_seconds",
	I0319 20:04:59.608092   43680 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0319 20:04:59.608101   43680 command_runner.go:130] > # 	"operations_errors_total",
	I0319 20:04:59.608111   43680 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0319 20:04:59.608122   43680 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0319 20:04:59.608132   43680 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0319 20:04:59.608142   43680 command_runner.go:130] > # 	"image_pulls_success_total",
	I0319 20:04:59.608151   43680 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0319 20:04:59.608159   43680 command_runner.go:130] > # 	"containers_oom_count_total",
	I0319 20:04:59.608164   43680 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0319 20:04:59.608174   43680 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0319 20:04:59.608183   43680 command_runner.go:130] > # ]
	I0319 20:04:59.608193   43680 command_runner.go:130] > # The port on which the metrics server will listen.
	I0319 20:04:59.608203   43680 command_runner.go:130] > # metrics_port = 9090
	I0319 20:04:59.608214   43680 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0319 20:04:59.608224   43680 command_runner.go:130] > # metrics_socket = ""
	I0319 20:04:59.608235   43680 command_runner.go:130] > # The certificate for the secure metrics server.
	I0319 20:04:59.608251   43680 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0319 20:04:59.608287   43680 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0319 20:04:59.608300   43680 command_runner.go:130] > # certificate on any modification event.
	I0319 20:04:59.608309   43680 command_runner.go:130] > # metrics_cert = ""
	I0319 20:04:59.608320   43680 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0319 20:04:59.608331   43680 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0319 20:04:59.608340   43680 command_runner.go:130] > # metrics_key = ""
	I0319 20:04:59.608350   43680 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0319 20:04:59.608357   43680 command_runner.go:130] > [crio.tracing]
	I0319 20:04:59.608366   43680 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0319 20:04:59.608376   43680 command_runner.go:130] > # enable_tracing = false
	I0319 20:04:59.608389   43680 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0319 20:04:59.608399   43680 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0319 20:04:59.608414   43680 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0319 20:04:59.608424   43680 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0319 20:04:59.608434   43680 command_runner.go:130] > # CRI-O NRI configuration.
	I0319 20:04:59.608443   43680 command_runner.go:130] > [crio.nri]
	I0319 20:04:59.608453   43680 command_runner.go:130] > # Globally enable or disable NRI.
	I0319 20:04:59.608460   43680 command_runner.go:130] > # enable_nri = false
	I0319 20:04:59.608468   43680 command_runner.go:130] > # NRI socket to listen on.
	I0319 20:04:59.608478   43680 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0319 20:04:59.608489   43680 command_runner.go:130] > # NRI plugin directory to use.
	I0319 20:04:59.608497   43680 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0319 20:04:59.608509   43680 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0319 20:04:59.608520   43680 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0319 20:04:59.608535   43680 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0319 20:04:59.608545   43680 command_runner.go:130] > # nri_disable_connections = false
	I0319 20:04:59.608556   43680 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0319 20:04:59.608563   43680 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0319 20:04:59.608571   43680 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0319 20:04:59.608582   43680 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0319 20:04:59.608596   43680 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0319 20:04:59.608604   43680 command_runner.go:130] > [crio.stats]
	I0319 20:04:59.608616   43680 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0319 20:04:59.608628   43680 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0319 20:04:59.608639   43680 command_runner.go:130] > # stats_collection_period = 0
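
Reference sketch (not part of the captured run): the [crio.metrics] section above sets enable_metrics = true and leaves metrics_port at its default of 9090. A minimal Go probe of that endpoint, assuming the default port and that it is queried from the node itself:

    // probe_crio_metrics.go - minimal sketch: probe CRI-O's Prometheus endpoint.
    // Assumes enable_metrics = true and the default metrics_port = 9090 shown above.
    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 3 * time.Second}
        resp, err := client.Get("http://127.0.0.1:9090/metrics")
        if err != nil {
            fmt.Println("metrics endpoint not reachable:", err)
            return
        }
        defer resp.Body.Close()

        fmt.Println("status:", resp.Status)
        // Print the first few exposition lines (e.g. crio_operations_total ...).
        sc := bufio.NewScanner(resp.Body)
        for i := 0; i < 5 && sc.Scan(); i++ {
            fmt.Println(sc.Text())
        }
    }
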
	I0319 20:04:59.608849   43680 cni.go:84] Creating CNI manager for ""
	I0319 20:04:59.608871   43680 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 20:04:59.608881   43680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:04:59.608910   43680 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-695944 NodeName:multinode-695944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:04:59.609058   43680 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-695944"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
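
For context, a sketch (not from the test run) of the sanity check implied by the generated config above: the advertise address must sit outside both the pod subnet and the service subnet, and the two subnets must not overlap. The three values are copied from the log rather than read from the cluster:

    // cidr_check.go - minimal sketch: verify the addresses from the kubeadm config
    // above (pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12, node 192.168.39.64).
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, podNet, _ := net.ParseCIDR("10.244.0.0/16") // literals, so parse errors ignored
        _, svcNet, _ := net.ParseCIDR("10.96.0.0/12")
        nodeIP := net.ParseIP("192.168.39.64")

        fmt.Println("node IP inside pod CIDR?    ", podNet.Contains(nodeIP)) // want false
        fmt.Println("node IP inside service CIDR?", svcNet.Contains(nodeIP)) // want false

        // Two CIDR blocks overlap iff either contains the other's base address.
        overlap := podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)
        fmt.Println("pod/service CIDRs overlap?  ", overlap) // want false
    }
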
	
	I0319 20:04:59.609129   43680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:04:59.620530   43680 command_runner.go:130] > kubeadm
	I0319 20:04:59.620544   43680 command_runner.go:130] > kubectl
	I0319 20:04:59.620548   43680 command_runner.go:130] > kubelet
	I0319 20:04:59.620667   43680 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:04:59.620730   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:04:59.631327   43680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0319 20:04:59.653641   43680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:04:59.674595   43680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0319 20:04:59.696037   43680 ssh_runner.go:195] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0319 20:04:59.700724   43680 command_runner.go:130] > 192.168.39.64	control-plane.minikube.internal
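
As a side note (not part of the run), the grep above just confirms that /etc/hosts already maps control-plane.minikube.internal to the node IP. A minimal Go sketch of the same lookup; the IP and hostname are the values from the log:

    // hosts_check.go - minimal sketch: look for the control-plane entry in /etc/hosts.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const wantIP, wantHost = "192.168.39.64", "control-plane.minikube.internal"

        f, err := os.Open("/etc/hosts")
        if err != nil {
            fmt.Println("cannot read /etc/hosts:", err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == wantIP && fields[1] == wantHost {
                fmt.Println("entry present:", sc.Text())
                return
            }
        }
        fmt.Println("entry missing")
    }
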
	I0319 20:04:59.700984   43680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:04:59.854046   43680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:04:59.870237   43680 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944 for IP: 192.168.39.64
	I0319 20:04:59.870257   43680 certs.go:194] generating shared ca certs ...
	I0319 20:04:59.870273   43680 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:04:59.870459   43680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:04:59.870514   43680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:04:59.870526   43680 certs.go:256] generating profile certs ...
	I0319 20:04:59.870611   43680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/client.key
	I0319 20:04:59.870678   43680 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.key.e90732cd
	I0319 20:04:59.870712   43680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.key
	I0319 20:04:59.870723   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 20:04:59.870739   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 20:04:59.870751   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 20:04:59.870766   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 20:04:59.870778   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 20:04:59.870791   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 20:04:59.870808   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 20:04:59.870820   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 20:04:59.870870   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:04:59.870901   43680 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:04:59.870910   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:04:59.870933   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:04:59.870955   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:04:59.870978   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:04:59.871014   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:04:59.871037   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 20:04:59.871060   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 20:04:59.871072   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:04:59.871696   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:04:59.899925   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:04:59.927035   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:04:59.953712   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:04:59.981130   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:05:00.007909   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:05:00.034255   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:05:00.061925   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 20:05:00.090580   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:05:00.117514   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:05:00.144156   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:05:00.170936   43680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:05:00.189010   43680 ssh_runner.go:195] Run: openssl version
	I0319 20:05:00.195252   43680 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0319 20:05:00.195592   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:05:00.208153   43680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.213186   43680 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.213204   43680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.213235   43680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.219283   43680 command_runner.go:130] > 3ec20f2e
	I0319 20:05:00.219340   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:05:00.228912   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:05:00.240218   43680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.245033   43680 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.245153   43680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.245193   43680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.251394   43680 command_runner.go:130] > b5213941
	I0319 20:05:00.251444   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:05:00.261510   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:05:00.273402   43680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.278573   43680 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.278769   43680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.278825   43680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.285037   43680 command_runner.go:130] > 51391683
	I0319 20:05:00.285153   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
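
The openssl/ln pairs above install each CA certificate under its OpenSSL subject hash in /etc/ssl/certs. A rough Go sketch mirroring those two logged commands, not minikube's own code; it shells out to the openssl binary, needs root, and the input path is just one example from the log:

    // cert_hash_link.go - minimal sketch: compute a certificate's OpenSSL subject
    // hash and link it as /etc/ssl/certs/<hash>.0, as the logged commands do.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs equivalent: drop any stale link first
        if err := os.Symlink(certPath, link); err != nil {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Println("linked", link, "->", certPath)
    }
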
	I0319 20:05:00.295782   43680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:05:00.301140   43680 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:05:00.301165   43680 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0319 20:05:00.301174   43680 command_runner.go:130] > Device: 253,1	Inode: 6292486     Links: 1
	I0319 20:05:00.301185   43680 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0319 20:05:00.301197   43680 command_runner.go:130] > Access: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301205   43680 command_runner.go:130] > Modify: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301216   43680 command_runner.go:130] > Change: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301226   43680 command_runner.go:130] >  Birth: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301279   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:05:00.307708   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.307755   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:05:00.313926   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.314143   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:05:00.319987   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.320274   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:05:00.326122   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.326308   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:05:00.332342   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.332404   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:05:00.338461   43680 command_runner.go:130] > Certificate will not expire
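
Each "-checkend 86400" call above asks whether a certificate expires within the next 24 hours. A minimal standard-library Go sketch of the equivalent check (an illustration, not the tool's implementation); the certificate path is one of the files checked in the log:

    // cert_checkend.go - minimal sketch of "openssl x509 -checkend 86400":
    // report whether a PEM certificate expires within 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse failed:", err)
            return
        }
        if time.Until(cert.NotAfter) > 24*time.Hour {
            fmt.Println("Certificate will not expire") // same wording as the log
        } else {
            fmt.Println("Certificate will expire within 24h")
        }
    }
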
	I0319 20:05:00.338521   43680 kubeadm.go:391] StartCluster: {Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:05:00.338694   43680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:05:00.338740   43680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:05:00.380909   43680 command_runner.go:130] > e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc
	I0319 20:05:00.380941   43680 command_runner.go:130] > a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f
	I0319 20:05:00.380951   43680 command_runner.go:130] > b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751
	I0319 20:05:00.380960   43680 command_runner.go:130] > baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265
	I0319 20:05:00.380970   43680 command_runner.go:130] > 06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f
	I0319 20:05:00.380978   43680 command_runner.go:130] > 8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd
	I0319 20:05:00.380990   43680 command_runner.go:130] > ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4
	I0319 20:05:00.381003   43680 command_runner.go:130] > 7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a
	I0319 20:05:00.381034   43680 cri.go:89] found id: "e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc"
	I0319 20:05:00.381045   43680 cri.go:89] found id: "a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f"
	I0319 20:05:00.381051   43680 cri.go:89] found id: "b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751"
	I0319 20:05:00.381056   43680 cri.go:89] found id: "baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265"
	I0319 20:05:00.381060   43680 cri.go:89] found id: "06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f"
	I0319 20:05:00.381067   43680 cri.go:89] found id: "8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd"
	I0319 20:05:00.381073   43680 cri.go:89] found id: "ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4"
	I0319 20:05:00.381076   43680 cri.go:89] found id: "7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a"
	I0319 20:05:00.381081   43680 cri.go:89] found id: ""
	I0319 20:05:00.381149   43680 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.695322849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878791695299842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b29c5e8-4517-4a9d-ac39-fbaac4a226a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.696712114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cfd7c92-2c52-4c0a-9b85-6f764a634495 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.696803840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cfd7c92-2c52-4c0a-9b85-6f764a634495 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.697146506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cfd7c92-2c52-4c0a-9b85-6f764a634495 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.747469447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0b478c4-f917-4a92-a44e-3c04979c383f name=/runtime.v1.RuntimeService/Version
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.747541904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0b478c4-f917-4a92-a44e-3c04979c383f name=/runtime.v1.RuntimeService/Version
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.749182410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=726a2ebc-f397-4a84-ba73-d6d4ea6b875c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.749953470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878791749927520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=726a2ebc-f397-4a84-ba73-d6d4ea6b875c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.750864784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e7b0f35-e527-4c98-b0dc-f3585bf73cc6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.750968055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e7b0f35-e527-4c98-b0dc-f3585bf73cc6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.751313110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e7b0f35-e527-4c98-b0dc-f3585bf73cc6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.800903354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81b0cf37-6595-4ebe-9b98-278e35706d16 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.800988411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81b0cf37-6595-4ebe-9b98-278e35706d16 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.802006278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fe2935d-d583-4782-83ee-87936e3cd5cc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.802424528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878791802399796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fe2935d-d583-4782-83ee-87936e3cd5cc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.803028373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=532811dc-7dc6-4683-8a7e-7825d9bd8a1d name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.803082767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=532811dc-7dc6-4683-8a7e-7825d9bd8a1d name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.803521586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=532811dc-7dc6-4683-8a7e-7825d9bd8a1d name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.851005074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77e0bc19-44a6-42d7-a434-eb39c8bfc2b7 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.851116126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77e0bc19-44a6-42d7-a434-eb39c8bfc2b7 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.851967773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1d92940-7e40-4231-b9bd-349b6cb303c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.852413656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878791852390589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1d92940-7e40-4231-b9bd-349b6cb303c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.852978912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9560f925-72b8-4d49-bce6-d2afbd411407 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.853059746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9560f925-72b8-4d49-bce6-d2afbd411407 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:06:31 multinode-695944 crio[2866]: time="2024-03-19 20:06:31.853425249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9560f925-72b8-4d49-bce6-d2afbd411407 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c1e30cc4325b9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      51 seconds ago       Running             busybox                   1                   108d1fa562aee       busybox-7fdf7869d9-dlzz4
	00358166ec138       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   a56687072b8da       coredns-76f75df574-m5zqf
	bc234f70b0e0b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   0eeba5cb52be3       kindnet-w4nsf
	7bb3a2f1cbd37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   dcc5215b26eea       storage-provisioner
	18a85fa20f090       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   834998258ada3       kube-proxy-84qh5
	af4ab955538b6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   559c72a4b9336       kube-apiserver-multinode-695944
	c8e1d961e7965       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   bb44e33a50e74       kube-controller-manager-multinode-695944
	9afa717c7fcfb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   e0dbcfb5394dd       etcd-multinode-695944
	a532a9a37c276       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   98bf76a234cf0       kube-scheduler-multinode-695944
	0b2b4eaabe922       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   aadfa828b775a       busybox-7fdf7869d9-dlzz4
	e8f774cccbbfb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   87cc29b1b8a27       storage-provisioner
	a674af55049f2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   29f352c4970d9       coredns-76f75df574-m5zqf
	b28a2897f4ff9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   5a672beea4c50       kindnet-w4nsf
	baf0f1559ad90       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   ebdbcba069313       kube-proxy-84qh5
	06c74ed2873c2       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   4a82463a70f78       kube-scheduler-multinode-695944
	8e65071c13c79       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   4f74ea81616f9       kube-controller-manager-multinode-695944
	ea6d672313249       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   76225c7bbd791       etcd-multinode-695944
	7f2d48e900d9e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   e9a8a9e5729a3       kube-apiserver-multinode-695944
	
	
	==> coredns [00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42803 - 39467 "HINFO IN 2115569935030661442.1698298732371478072. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014596738s
	
	
	==> coredns [a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f] <==
	[INFO] 10.244.1.2:44345 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001755423s
	[INFO] 10.244.1.2:46902 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100479s
	[INFO] 10.244.1.2:34240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103129s
	[INFO] 10.244.1.2:44764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001238995s
	[INFO] 10.244.1.2:44078 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088083s
	[INFO] 10.244.1.2:34798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112391s
	[INFO] 10.244.1.2:39889 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102421s
	[INFO] 10.244.0.3:55428 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197803s
	[INFO] 10.244.0.3:33089 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097542s
	[INFO] 10.244.0.3:34962 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007454s
	[INFO] 10.244.0.3:55544 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075689s
	[INFO] 10.244.1.2:36294 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00028126s
	[INFO] 10.244.1.2:51905 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171211s
	[INFO] 10.244.1.2:42128 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133258s
	[INFO] 10.244.1.2:41923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000185784s
	[INFO] 10.244.0.3:46893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282187s
	[INFO] 10.244.0.3:50415 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128375s
	[INFO] 10.244.0.3:42790 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191536s
	[INFO] 10.244.0.3:55861 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088957s
	[INFO] 10.244.1.2:47241 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321839s
	[INFO] 10.244.1.2:41760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015151s
	[INFO] 10.244.1.2:53712 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144737s
	[INFO] 10.244.1.2:47973 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174147s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-695944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-695944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=multinode-695944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_58_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:58:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-695944
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:06:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:58:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:58:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:58:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    multinode-695944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3780e196839483195c87cd874aaaec3
	  System UUID:                b3780e19-6839-4831-95c8-7cd874aaaec3
	  Boot ID:                    53258622-94ab-4256-b665-5c00d785c28d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-dlzz4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-76f75df574-m5zqf                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m29s
	  kube-system                 etcd-multinode-695944                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m44s
	  kube-system                 kindnet-w4nsf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-apiserver-multinode-695944             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-controller-manager-multinode-695944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-proxy-84qh5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-multinode-695944             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m27s              kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m42s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m42s              kubelet          Node multinode-695944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s              kubelet          Node multinode-695944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s              kubelet          Node multinode-695944 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m42s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m30s              node-controller  Node multinode-695944 event: Registered Node multinode-695944 in Controller
	  Normal  NodeReady                7m27s              kubelet          Node multinode-695944 status is now: NodeReady
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node multinode-695944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node multinode-695944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)  kubelet          Node multinode-695944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           74s                node-controller  Node multinode-695944 event: Registered Node multinode-695944 in Controller
	
	
	Name:               multinode-695944-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-695944-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=multinode-695944
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T20_05_50_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:05:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-695944-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:05:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-695944-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 213dd72380c94515b19b4d2c8d1ecff8
	  System UUID:                213dd723-80c9-4515-b19b-4d2c8d1ecff8
	  Boot ID:                    432f3741-8a07-44e4-b952-0dd7781e43d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-xbp2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-278kv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m52s
	  kube-system                 kube-proxy-6x79z            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 38s                    kube-proxy       
	  Normal  Starting                 6m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m52s (x2 over 6m52s)  kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m52s (x2 over 6m52s)  kubelet          Node multinode-695944-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m52s (x2 over 6m52s)  kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m41s                  kubelet          Node multinode-695944-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  43s (x2 over 43s)      kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x2 over 43s)      kubelet          Node multinode-695944-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x2 over 43s)      kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           39s                    node-controller  Node multinode-695944-m02 event: Registered Node multinode-695944-m02 in Controller
	  Normal  NodeReady                34s                    kubelet          Node multinode-695944-m02 status is now: NodeReady
	
	
	Name:               multinode-695944-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-695944-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=multinode-695944
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T20_06_20_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:06:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-695944-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:06:28 +0000   Tue, 19 Mar 2024 20:06:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:06:28 +0000   Tue, 19 Mar 2024 20:06:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:06:28 +0000   Tue, 19 Mar 2024 20:06:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:06:28 +0000   Tue, 19 Mar 2024 20:06:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    multinode-695944-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d66fd030935c42d8bb79fd1b61e26cef
	  System UUID:                d66fd030-935c-42d8-bb79-fd1b61e26cef
	  Boot ID:                    378dc01b-123b-4202-a99f-a6b4fa23a85d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6kvnk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-proxy-z5zqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m                     kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m17s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m5s (x2 over 6m5s)    kubelet     Node multinode-695944-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x2 over 6m5s)    kubelet     Node multinode-695944-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x2 over 6m5s)    kubelet     Node multinode-695944-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m55s                  kubelet     Node multinode-695944-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m22s (x2 over 5m22s)  kubelet     Node multinode-695944-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m22s (x2 over 5m22s)  kubelet     Node multinode-695944-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m22s (x2 over 5m22s)  kubelet     Node multinode-695944-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m12s                  kubelet     Node multinode-695944-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-695944-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-695944-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-695944-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-695944-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.061890] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077659] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.189961] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.127415] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.289647] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +5.066014] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.060150] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.848488] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[  +0.468867] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.821224] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.086902] kauditd_printk_skb: 41 callbacks suppressed
	[Mar19 19:59] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +0.072972] kauditd_printk_skb: 21 callbacks suppressed
	[ +49.699123] kauditd_printk_skb: 82 callbacks suppressed
	[Mar19 20:04] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.138617] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.199797] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +0.149799] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.296161] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.814247] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[Mar19 20:05] systemd-fstab-generator[3077]: Ignoring "noauto" option for root device
	[  +4.672038] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.961443] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.933688] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[ +16.905761] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db] <==
	{"level":"info","ts":"2024-03-19T20:05:03.277003Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:05:03.277016Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:05:03.277293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2024-03-19T20:05:03.277385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-03-19T20:05:03.277522Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:05:03.279653Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:05:03.291017Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-19T20:05:03.29128Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dcc3547d111063c","initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-19T20:05:03.29133Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:05:03.291427Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:05:03.291461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:05:04.339159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-19T20:05:04.339276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:05:04.339315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgPreVoteResp from 7dcc3547d111063c at term 2"}
	{"level":"info","ts":"2024-03-19T20:05:04.339345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.33937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.339397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became leader at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.339427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.349222Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7dcc3547d111063c","local-member-attributes":"{Name:multinode-695944 ClientURLs:[https://192.168.39.64:2379]}","request-path":"/0/members/7dcc3547d111063c/attributes","cluster-id":"c3619ef1effce12d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:05:04.349247Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:05:04.349501Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:05:04.349549Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:05:04.349283Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:05:04.351684Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.64:2379"}
	{"level":"info","ts":"2024-03-19T20:05:04.351812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4] <==
	{"level":"warn","ts":"2024-03-19T19:59:40.434656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.794642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T19:59:40.434691Z","caller":"traceutil/trace.go:171","msg":"trace[459030470] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:452; }","duration":"232.942193ms","start":"2024-03-19T19:59:40.201739Z","end":"2024-03-19T19:59:40.434681Z","steps":["trace[459030470] 'agreement among raft nodes before linearized reading'  (duration: 232.794077ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:59:43.214021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.558398ms","expected-duration":"100ms","prefix":"","request":"header:<ID:449390572704474532 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-695944-m02\" mod_revision:476 > success:<request_put:<key:\"/registry/minions/multinode-695944-m02\" value_size:2892 >> failure:<request_range:<key:\"/registry/minions/multinode-695944-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-19T19:59:43.21437Z","caller":"traceutil/trace.go:171","msg":"trace[2060675690] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"359.334617ms","start":"2024-03-19T19:59:42.855008Z","end":"2024-03-19T19:59:43.214343Z","steps":["trace[2060675690] 'read index received'  (duration: 160.077522ms)","trace[2060675690] 'applied index is now lower than readState.Index'  (duration: 199.255582ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T19:59:43.2145Z","caller":"traceutil/trace.go:171","msg":"trace[1221424993] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"419.277568ms","start":"2024-03-19T19:59:42.795211Z","end":"2024-03-19T19:59:43.214489Z","steps":["trace[1221424993] 'process raft request'  (duration: 220.063963ms)","trace[1221424993] 'compare'  (duration: 198.261774ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T19:59:43.214695Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:59:42.795195Z","time spent":"419.35851ms","remote":"127.0.0.1:48166","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2938,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-695944-m02\" mod_revision:476 > success:<request_put:<key:\"/registry/minions/multinode-695944-m02\" value_size:2892 >> failure:<request_range:<key:\"/registry/minions/multinode-695944-m02\" > >"}
	{"level":"warn","ts":"2024-03-19T19:59:43.21486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.845338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-695944-m02\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-03-19T19:59:43.214937Z","caller":"traceutil/trace.go:171","msg":"trace[642578206] range","detail":"{range_begin:/registry/minions/multinode-695944-m02; range_end:; response_count:1; response_revision:480; }","duration":"359.942471ms","start":"2024-03-19T19:59:42.854975Z","end":"2024-03-19T19:59:43.214918Z","steps":["trace[642578206] 'agreement among raft nodes before linearized reading'  (duration: 359.852738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:59:43.214971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:59:42.854962Z","time spent":"359.999415ms","remote":"127.0.0.1:48166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2976,"request content":"key:\"/registry/minions/multinode-695944-m02\" "}
	{"level":"warn","ts":"2024-03-19T20:00:27.981018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.915593ms","expected-duration":"100ms","prefix":"","request":"header:<ID:449390572704474876 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-695944-m03.17be42dd8997384d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-695944-m03.17be42dd8997384d\" value_size:646 lease:449390572704474601 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-03-19T20:00:27.981305Z","caller":"traceutil/trace.go:171","msg":"trace[1422053545] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"188.638292ms","start":"2024-03-19T20:00:27.792648Z","end":"2024-03-19T20:00:27.981287Z","steps":["trace[1422053545] 'process raft request'  (duration: 188.585037ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:00:27.98131Z","caller":"traceutil/trace.go:171","msg":"trace[742572598] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"248.357577ms","start":"2024-03-19T20:00:27.732937Z","end":"2024-03-19T20:00:27.981294Z","steps":["trace[742572598] 'read index received'  (duration: 20.167µs)","trace[742572598] 'applied index is now lower than readState.Index'  (duration: 248.336326ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:00:27.981497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.542091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-695944-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:00:27.981633Z","caller":"traceutil/trace.go:171","msg":"trace[1283517086] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-695944-m03; range_end:; response_count:0; response_revision:578; }","duration":"248.647213ms","start":"2024-03-19T20:00:27.732916Z","end":"2024-03-19T20:00:27.981564Z","steps":["trace[1283517086] 'agreement among raft nodes before linearized reading'  (duration: 248.508903ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:00:27.981765Z","caller":"traceutil/trace.go:171","msg":"trace[607254771] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"248.374096ms","start":"2024-03-19T20:00:27.732914Z","end":"2024-03-19T20:00:27.981288Z","steps":["trace[607254771] 'process raft request'  (duration: 73.046538ms)","trace[607254771] 'compare'  (duration: 174.705789ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T20:03:26.746981Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-19T20:03:26.747148Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-695944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"]}
	{"level":"warn","ts":"2024-03-19T20:03:26.747259Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:03:26.747429Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:03:26.810421Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.64:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:03:26.810473Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.64:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T20:03:26.811929Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7dcc3547d111063c","current-leader-member-id":"7dcc3547d111063c"}
	{"level":"info","ts":"2024-03-19T20:03:26.814431Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:03:26.814629Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:03:26.814641Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-695944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"]}
	
	
	==> kernel <==
	 20:06:32 up 8 min,  0 users,  load average: 0.09, 0.14, 0.09
	Linux multinode-695944 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751] <==
	I0319 20:02:45.791842       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:02:55.805975       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:02:55.806181       1 main.go:227] handling current node
	I0319 20:02:55.806223       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:02:55.806244       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:02:55.806395       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:02:55.806416       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:03:05.816175       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:03:05.816538       1 main.go:227] handling current node
	I0319 20:03:05.816684       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:03:05.816795       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:03:05.816987       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:03:05.817049       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:03:15.830360       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:03:15.830495       1 main.go:227] handling current node
	I0319 20:03:15.830525       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:03:15.830545       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:03:15.830735       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:03:15.830772       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:03:25.841367       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:03:25.841448       1 main.go:227] handling current node
	I0319 20:03:25.841464       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:03:25.841474       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:03:25.841790       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:03:25.841842       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811] <==
	I0319 20:05:47.721409       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:05:57.734080       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:05:57.734178       1 main.go:227] handling current node
	I0319 20:05:57.734213       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:05:57.734234       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:05:57.734360       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:05:57.734381       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:06:07.748257       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:06:07.748310       1 main.go:227] handling current node
	I0319 20:06:07.748321       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:06:07.748331       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:06:07.748434       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:06:07.748439       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:06:17.756723       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:06:17.756827       1 main.go:227] handling current node
	I0319 20:06:17.756865       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:06:17.756889       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:06:17.757190       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:06:17.757232       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:06:27.763742       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:06:27.764042       1 main.go:227] handling current node
	I0319 20:06:27.764110       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:06:27.764148       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:06:27.764326       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:06:27.764369       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a] <==
	I0319 20:03:26.774291       1 controller.go:161] Shutting down OpenAPI controller
	I0319 20:03:26.774321       1 controller.go:129] Ending legacy_token_tracking_controller
	I0319 20:03:26.774344       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0319 20:03:26.774373       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0319 20:03:26.774411       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0319 20:03:26.774450       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0319 20:03:26.774486       1 establishing_controller.go:87] Shutting down EstablishingController
	I0319 20:03:26.774516       1 naming_controller.go:302] Shutting down NamingConditionController
	I0319 20:03:26.774549       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0319 20:03:26.774745       1 available_controller.go:439] Shutting down AvailableConditionController
	I0319 20:03:26.776417       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0319 20:03:26.776452       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0319 20:03:26.776489       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0319 20:03:26.777728       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 20:03:26.777798       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:03:26.777862       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0319 20:03:26.777898       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0319 20:03:26.777925       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0319 20:03:26.777955       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0319 20:03:26.778028       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0319 20:03:26.778056       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 20:03:26.781699       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0319 20:03:26.781773       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:03:26.781843       1 controller.go:159] Shutting down quota evaluator
	I0319 20:03:26.781873       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08] <==
	I0319 20:05:05.693922       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0319 20:05:05.750239       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:05:05.750360       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 20:05:05.793847       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 20:05:05.794090       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 20:05:05.794507       1 aggregator.go:165] initial CRD sync complete...
	I0319 20:05:05.794556       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 20:05:05.794646       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 20:05:05.798392       1 cache.go:39] Caches are synced for autoregister controller
	I0319 20:05:05.801892       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 20:05:05.867882       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 20:05:05.879039       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 20:05:05.881415       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 20:05:05.881965       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 20:05:05.887157       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 20:05:05.887257       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0319 20:05:05.902142       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 20:05:06.708410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 20:05:08.084061       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 20:05:08.222242       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 20:05:08.238228       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 20:05:08.311283       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 20:05:08.318667       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 20:05:18.447886       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 20:05:18.498231       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd] <==
	I0319 19:59:57.554814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="34.602µs"
	I0319 19:59:57.713188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="6.397936ms"
	I0319 19:59:57.716082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.028µs"
	I0319 20:00:27.985928       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:00:27.985995       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m03\" does not exist"
	I0319 20:00:28.020695       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z5zqq"
	I0319 20:00:28.025986       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m03" podCIDRs=["10.244.2.0/24"]
	I0319 20:00:28.026226       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6kvnk"
	I0319 20:00:32.735902       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-695944-m03"
	I0319 20:00:32.735987       1 event.go:376] "Event occurred" object="multinode-695944-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-695944-m03 event: Registered Node multinode-695944-m03 in Controller"
	I0319 20:00:37.940270       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:01:09.498892       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:01:10.511731       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m03\" does not exist"
	I0319 20:01:10.511819       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:01:10.541110       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m03" podCIDRs=["10.244.3.0/24"]
	I0319 20:01:20.257330       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:02:02.792858       1 event.go:376] "Event occurred" object="multinode-695944-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-695944-m02 status is now: NodeNotReady"
	I0319 20:02:02.793014       1 event.go:376] "Event occurred" object="multinode-695944-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-695944-m03 status is now: NodeNotReady"
	I0319 20:02:02.808821       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-qsnxk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.816940       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-z5zqq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.824967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="15.734179ms"
	I0319 20:02:02.825543       1 event.go:376] "Event occurred" object="kube-system/kindnet-278kv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.827137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.883µs"
	I0319 20:02:02.839546       1 event.go:376] "Event occurred" object="kube-system/kindnet-6kvnk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.841312       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-6x79z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1] <==
	I0319 20:05:44.912242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.077602ms"
	I0319 20:05:44.912334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.229µs"
	I0319 20:05:48.269638       1 event.go:376] "Event occurred" object="multinode-695944-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-695944-m02 event: Removing Node multinode-695944-m02 from Controller"
	I0319 20:05:48.951088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="153.947µs"
	I0319 20:05:49.365687       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m02\" does not exist"
	I0319 20:05:49.366190       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-qsnxk" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-qsnxk"
	I0319 20:05:49.382824       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m02" podCIDRs=["10.244.1.0/24"]
	I0319 20:05:51.250534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.507µs"
	I0319 20:05:51.279267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.258µs"
	I0319 20:05:51.292033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="76.866µs"
	I0319 20:05:51.313462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="111.541µs"
	I0319 20:05:51.326661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="119.581µs"
	I0319 20:05:51.329841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="35.616µs"
	I0319 20:05:53.270543       1 event.go:376] "Event occurred" object="multinode-695944-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-695944-m02 event: Registered Node multinode-695944-m02 in Controller"
	I0319 20:05:58.832157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:05:58.859023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.79µs"
	I0319 20:05:58.875014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.876µs"
	I0319 20:06:02.690814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.414986ms"
	I0319 20:06:02.690916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="33.204µs"
	I0319 20:06:03.282235       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-xbp2r" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-xbp2r"
	I0319 20:06:18.483168       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:06:19.494678       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m03\" does not exist"
	I0319 20:06:19.494761       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:06:19.519139       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m03" podCIDRs=["10.244.2.0/24"]
	I0319 20:06:28.696197       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	
	
	==> kube-proxy [18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4] <==
	I0319 20:05:07.131946       1 server_others.go:72] "Using iptables proxy"
	I0319 20:05:07.197384       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	I0319 20:05:07.320787       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:05:07.320835       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:05:07.320855       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:05:07.325197       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:05:07.325464       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:05:07.325500       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:05:07.328520       1 config.go:188] "Starting service config controller"
	I0319 20:05:07.328672       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:05:07.328740       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:05:07.328759       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:05:07.329331       1 config.go:315] "Starting node config controller"
	I0319 20:05:07.329390       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:05:07.429666       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:05:07.429735       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 20:05:07.429564       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265] <==
	I0319 19:59:04.698082       1 server_others.go:72] "Using iptables proxy"
	I0319 19:59:04.719668       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	I0319 19:59:04.854127       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:59:04.854150       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:59:04.854164       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:59:04.864763       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:59:04.866867       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:59:04.866889       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:59:04.883421       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:59:04.884741       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:59:04.884849       1 config.go:188] "Starting service config controller"
	I0319 19:59:04.884857       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:59:04.891002       1 config.go:315] "Starting node config controller"
	I0319 19:59:04.891014       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:59:04.984892       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 19:59:04.984936       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:59:04.995377       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f] <==
	W0319 19:58:46.683308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:58:46.683325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:58:47.478809       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 19:58:47.478923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 19:58:47.516213       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0319 19:58:47.516274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 19:58:47.517287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:58:47.518133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:58:47.580960       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 19:58:47.581179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 19:58:47.597931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 19:58:47.598119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 19:58:47.760700       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 19:58:47.760820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 19:58:47.777847       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:58:47.778032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:58:47.812068       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 19:58:47.812128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:58:48.101051       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:58:48.101126       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 19:58:50.850474       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:03:26.749915       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0319 20:03:26.750139       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 20:03:26.750688       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0319 20:03:26.761350       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab] <==
	I0319 20:05:03.870915       1 serving.go:380] Generated self-signed cert in-memory
	W0319 20:05:05.771343       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0319 20:05:05.771447       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:05:05.771494       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0319 20:05:05.771527       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0319 20:05:05.803320       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 20:05:05.803436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:05:05.805220       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 20:05:05.805310       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:05:05.806431       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 20:05:05.807700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 20:05:05.906510       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.089904    3084 topology_manager.go:215] "Topology Admit Handler" podUID="f9e97606-2a07-4334-9e8c-9a0acc183fb4" podNamespace="kube-system" podName="storage-provisioner"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.089982    3084 topology_manager.go:215] "Topology Admit Handler" podUID="1b2c8147-6a4d-4820-9ebe-31e7cd960267" podNamespace="default" podName="busybox-7fdf7869d9-dlzz4"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.101451    3084 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.177810    3084 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ba1f08a-8d4c-4103-a194-92e0ac532af6-cni-cfg\") pod \"kindnet-w4nsf\" (UID: \"8ba1f08a-8d4c-4103-a194-92e0ac532af6\") " pod="kube-system/kindnet-w4nsf"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.177843    3084 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ba1f08a-8d4c-4103-a194-92e0ac532af6-lib-modules\") pod \"kindnet-w4nsf\" (UID: \"8ba1f08a-8d4c-4103-a194-92e0ac532af6\") " pod="kube-system/kindnet-w4nsf"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.177860    3084 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b-xtables-lock\") pod \"kube-proxy-84qh5\" (UID: \"12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b\") " pod="kube-system/kube-proxy-84qh5"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.177878    3084 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b-lib-modules\") pod \"kube-proxy-84qh5\" (UID: \"12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b\") " pod="kube-system/kube-proxy-84qh5"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.177899    3084 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9e97606-2a07-4334-9e8c-9a0acc183fb4-tmp\") pod \"storage-provisioner\" (UID: \"f9e97606-2a07-4334-9e8c-9a0acc183fb4\") " pod="kube-system/storage-provisioner"
	Mar 19 20:05:06 multinode-695944 kubelet[3084]: I0319 20:05:06.177916    3084 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ba1f08a-8d4c-4103-a194-92e0ac532af6-xtables-lock\") pod \"kindnet-w4nsf\" (UID: \"8ba1f08a-8d4c-4103-a194-92e0ac532af6\") " pod="kube-system/kindnet-w4nsf"
	Mar 19 20:05:08 multinode-695944 kubelet[3084]: I0319 20:05:08.277197    3084 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 19 20:05:11 multinode-695944 kubelet[3084]: I0319 20:05:11.934383    3084 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.134522    3084 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:06:02 multinode-695944 kubelet[3084]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:06:02 multinode-695944 kubelet[3084]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:06:02 multinode-695944 kubelet[3084]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:06:02 multinode-695944 kubelet[3084]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.181291    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc/crio-29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1: Error finding container 29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1: Status 404 returned error can't find the container with id 29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.181974    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b/crio-ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691: Error finding container ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691: Status 404 returned error can't find the container with id ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.182719    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod813b1a2d255714d9958f607062ff9ad5/crio-4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8: Error finding container 4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8: Status 404 returned error can't find the container with id 4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.183109    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc148738974805b7fe15b2299717a2811/crio-4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8: Error finding container 4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8: Status 404 returned error can't find the container with id 4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.183695    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda21c033a65560c5069d7589a314cda60/crio-76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32: Error finding container 76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32: Status 404 returned error can't find the container with id 76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.184355    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podf9e97606-2a07-4334-9e8c-9a0acc183fb4/crio-87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3: Error finding container 87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3: Status 404 returned error can't find the container with id 87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.184907    3084 manager.go:1116] Failed to create existing container: /kubepods/pod8ba1f08a-8d4c-4103-a194-92e0ac532af6/crio-5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf: Error finding container 5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf: Status 404 returned error can't find the container with id 5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.185394    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod1b2c8147-6a4d-4820-9ebe-31e7cd960267/crio-aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5: Error finding container aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5: Status 404 returned error can't find the container with id aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5
	Mar 19 20:06:02 multinode-695944 kubelet[3084]: E0319 20:06:02.186091    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d421c74900624e16edf47e6f064a46b/crio-e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b: Error finding container e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b: Status 404 returned error can't find the container with id e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:06:31.381736   44575 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-695944 -n multinode-695944
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-695944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (310.30s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695944 stop: exit status 82 (2m0.476871208s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-695944-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-695944 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695944 status: exit status 3 (18.651273157s)

                                                
                                                
-- stdout --
	multinode-695944
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-695944-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:08:54.708581   45125 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0319 20:08:54.708629   45125 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-695944 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-695944 -n multinode-695944
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-695944 logs -n 25: (1.635241294s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944:/home/docker/cp-test_multinode-695944-m02_multinode-695944.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944 sudo cat                                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m02_multinode-695944.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03:/home/docker/cp-test_multinode-695944-m02_multinode-695944-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944-m03 sudo cat                                   | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m02_multinode-695944-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp testdata/cp-test.txt                                                | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3232251347/001/cp-test_multinode-695944-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944:/home/docker/cp-test_multinode-695944-m03_multinode-695944.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944 sudo cat                                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m03_multinode-695944.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt                       | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m02:/home/docker/cp-test_multinode-695944-m03_multinode-695944-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n                                                                 | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | multinode-695944-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-695944 ssh -n multinode-695944-m02 sudo cat                                   | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	|         | /home/docker/cp-test_multinode-695944-m03_multinode-695944-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-695944 node stop m03                                                          | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:00 UTC |
	| node    | multinode-695944 node start                                                             | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:00 UTC | 19 Mar 24 20:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-695944                                                                | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:01 UTC |                     |
	| stop    | -p multinode-695944                                                                     | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:01 UTC |                     |
	| start   | -p multinode-695944                                                                     | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:03 UTC | 19 Mar 24 20:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-695944                                                                | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:06 UTC |                     |
	| node    | multinode-695944 node delete                                                            | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:06 UTC | 19 Mar 24 20:06 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-695944 stop                                                                   | multinode-695944 | jenkins | v1.32.0 | 19 Mar 24 20:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:03:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:03:25.886561   43680 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:03:25.886695   43680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:03:25.886706   43680 out.go:304] Setting ErrFile to fd 2...
	I0319 20:03:25.886712   43680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:03:25.886910   43680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:03:25.887434   43680 out.go:298] Setting JSON to false
	I0319 20:03:25.888293   43680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6304,"bootTime":1710872302,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:03:25.888352   43680 start.go:139] virtualization: kvm guest
	I0319 20:03:25.891252   43680 out.go:177] * [multinode-695944] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:03:25.892797   43680 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:03:25.894190   43680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:03:25.892803   43680 notify.go:220] Checking for updates...
	I0319 20:03:25.896688   43680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:03:25.898039   43680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:03:25.899376   43680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:03:25.900681   43680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:03:25.902226   43680 config.go:182] Loaded profile config "multinode-695944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:03:25.902306   43680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:03:25.902770   43680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:03:25.902831   43680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:03:25.917444   43680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0319 20:03:25.917826   43680 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:03:25.918341   43680 main.go:141] libmachine: Using API Version  1
	I0319 20:03:25.918377   43680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:03:25.918738   43680 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:03:25.918920   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:03:25.952640   43680 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:03:25.953926   43680 start.go:297] selected driver: kvm2
	I0319 20:03:25.953936   43680 start.go:901] validating driver "kvm2" against &{Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:03:25.954087   43680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:03:25.954403   43680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:03:25.954474   43680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:03:25.968121   43680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:03:25.968852   43680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:03:25.968915   43680 cni.go:84] Creating CNI manager for ""
	I0319 20:03:25.968927   43680 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 20:03:25.968977   43680 start.go:340] cluster config:
	{Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:03:25.969087   43680 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:03:25.971494   43680 out.go:177] * Starting "multinode-695944" primary control-plane node in "multinode-695944" cluster
	I0319 20:03:25.972600   43680 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:03:25.972625   43680 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:03:25.972634   43680 cache.go:56] Caching tarball of preloaded images
	I0319 20:03:25.972714   43680 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:03:25.972728   43680 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:03:25.972839   43680 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/config.json ...
	I0319 20:03:25.973023   43680 start.go:360] acquireMachinesLock for multinode-695944: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:03:25.973062   43680 start.go:364] duration metric: took 21.995µs to acquireMachinesLock for "multinode-695944"
	I0319 20:03:25.973088   43680 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:03:25.973096   43680 fix.go:54] fixHost starting: 
	I0319 20:03:25.973336   43680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:03:25.973367   43680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:03:25.986666   43680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0319 20:03:25.987106   43680 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:03:25.987623   43680 main.go:141] libmachine: Using API Version  1
	I0319 20:03:25.987645   43680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:03:25.987938   43680 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:03:25.988100   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:03:25.988228   43680 main.go:141] libmachine: (multinode-695944) Calling .GetState
	I0319 20:03:25.989757   43680 fix.go:112] recreateIfNeeded on multinode-695944: state=Running err=<nil>
	W0319 20:03:25.989772   43680 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:03:25.991584   43680 out.go:177] * Updating the running kvm2 "multinode-695944" VM ...
	I0319 20:03:25.992737   43680 machine.go:94] provisionDockerMachine start ...
	I0319 20:03:25.992754   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:03:25.992935   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:25.995446   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:25.995895   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:25.995924   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:25.996065   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:25.996232   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:25.996433   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:25.996588   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:25.996743   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:25.996962   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:25.996979   43680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:03:26.102360   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-695944
	
	I0319 20:03:26.102382   43680 main.go:141] libmachine: (multinode-695944) Calling .GetMachineName
	I0319 20:03:26.102610   43680 buildroot.go:166] provisioning hostname "multinode-695944"
	I0319 20:03:26.102630   43680 main.go:141] libmachine: (multinode-695944) Calling .GetMachineName
	I0319 20:03:26.102815   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.105344   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.105646   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.105682   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.105834   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.106028   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.106173   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.106342   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.106494   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:26.106664   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:26.106688   43680 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-695944 && echo "multinode-695944" | sudo tee /etc/hostname
	I0319 20:03:26.232012   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-695944
	
	I0319 20:03:26.232039   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.234935   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.235315   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.235345   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.235489   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.235691   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.235840   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.235983   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.236131   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:26.236318   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:26.236336   43680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-695944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-695944/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-695944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:03:26.342162   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:03:26.342187   43680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:03:26.342229   43680 buildroot.go:174] setting up certificates
	I0319 20:03:26.342240   43680 provision.go:84] configureAuth start
	I0319 20:03:26.342252   43680 main.go:141] libmachine: (multinode-695944) Calling .GetMachineName
	I0319 20:03:26.342507   43680 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:03:26.345321   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.345726   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.345764   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.345948   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.348177   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.348503   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.348551   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.348644   43680 provision.go:143] copyHostCerts
	I0319 20:03:26.348676   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:03:26.348713   43680 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:03:26.348724   43680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:03:26.348807   43680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:03:26.348948   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:03:26.348978   43680 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:03:26.348988   43680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:03:26.349036   43680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:03:26.349121   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:03:26.349147   43680 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:03:26.349157   43680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:03:26.349193   43680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:03:26.349276   43680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.multinode-695944 san=[127.0.0.1 192.168.39.64 localhost minikube multinode-695944]
	I0319 20:03:26.420765   43680 provision.go:177] copyRemoteCerts
	I0319 20:03:26.420820   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:03:26.420841   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.423425   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.423797   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.423838   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.424024   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.424197   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.424363   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.424489   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:03:26.507309   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0319 20:03:26.507386   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:03:26.538596   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0319 20:03:26.538663   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0319 20:03:26.573064   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0319 20:03:26.573133   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:03:26.607891   43680 provision.go:87] duration metric: took 265.639005ms to configureAuth
	I0319 20:03:26.607915   43680 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:03:26.608111   43680 config.go:182] Loaded profile config "multinode-695944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:03:26.608179   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:03:26.610532   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.610945   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:03:26.610982   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:03:26.611180   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:03:26.611369   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.611525   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:03:26.611671   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:03:26.611828   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:03:26.611999   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:03:26.612013   43680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:04:57.508070   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:04:57.508101   43680 machine.go:97] duration metric: took 1m31.51535123s to provisionDockerMachine
	I0319 20:04:57.508115   43680 start.go:293] postStartSetup for "multinode-695944" (driver="kvm2")
	I0319 20:04:57.508126   43680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:04:57.508141   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.508509   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:04:57.508542   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.511636   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.512040   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.512068   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.512235   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.512446   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.512602   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.512730   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:04:57.597580   43680 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:04:57.602234   43680 command_runner.go:130] > NAME=Buildroot
	I0319 20:04:57.602259   43680 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0319 20:04:57.602265   43680 command_runner.go:130] > ID=buildroot
	I0319 20:04:57.602273   43680 command_runner.go:130] > VERSION_ID=2023.02.9
	I0319 20:04:57.602280   43680 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0319 20:04:57.602323   43680 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:04:57.602339   43680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:04:57.602406   43680 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:04:57.602486   43680 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:04:57.602495   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /etc/ssl/certs/173012.pem
	I0319 20:04:57.602572   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:04:57.613637   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:04:57.640528   43680 start.go:296] duration metric: took 132.401619ms for postStartSetup
	I0319 20:04:57.640566   43680 fix.go:56] duration metric: took 1m31.667468548s for fixHost
	I0319 20:04:57.640586   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.642936   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.643285   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.643305   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.643472   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.643679   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.643854   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.644026   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.644180   43680 main.go:141] libmachine: Using SSH client type: native
	I0319 20:04:57.644429   43680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0319 20:04:57.644441   43680 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:04:57.745664   43680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710878697.731461853
	
	I0319 20:04:57.745682   43680 fix.go:216] guest clock: 1710878697.731461853
	I0319 20:04:57.745691   43680 fix.go:229] Guest: 2024-03-19 20:04:57.731461853 +0000 UTC Remote: 2024-03-19 20:04:57.640571222 +0000 UTC m=+91.803000366 (delta=90.890631ms)
	I0319 20:04:57.745713   43680 fix.go:200] guest clock delta is within tolerance: 90.890631ms
	I0319 20:04:57.745720   43680 start.go:83] releasing machines lock for "multinode-695944", held for 1m31.772646971s
	I0319 20:04:57.745743   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.746027   43680 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:04:57.748331   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.748745   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.748763   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.748956   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.749589   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.749791   43680 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:04:57.749890   43680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:04:57.749949   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.750022   43680 ssh_runner.go:195] Run: cat /version.json
	I0319 20:04:57.750050   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:04:57.752558   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.752828   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.752910   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.752937   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.753069   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.753186   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:57.753209   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:57.753211   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.753332   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:04:57.753411   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.753482   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:04:57.753536   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:04:57.753624   43680 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:04:57.753754   43680 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:04:57.829987   43680 command_runner.go:130] > {"iso_version": "v1.32.1-1710573846-18277", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "c68f4945cc664fefa1b332c623244b57043707c8"}
	I0319 20:04:57.830331   43680 ssh_runner.go:195] Run: systemctl --version
	I0319 20:04:57.858563   43680 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0319 20:04:57.859285   43680 command_runner.go:130] > systemd 252 (252)
	I0319 20:04:57.859323   43680 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0319 20:04:57.859392   43680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:04:58.025916   43680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0319 20:04:58.032768   43680 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0319 20:04:58.032900   43680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:04:58.032973   43680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:04:58.043001   43680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 20:04:58.043018   43680 start.go:494] detecting cgroup driver to use...
	I0319 20:04:58.043102   43680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:04:58.061710   43680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:04:58.076639   43680 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:04:58.076689   43680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:04:58.091656   43680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:04:58.105954   43680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:04:58.254755   43680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:04:58.401134   43680 docker.go:233] disabling docker service ...
	I0319 20:04:58.401204   43680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:04:58.419327   43680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:04:58.434180   43680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:04:58.600032   43680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:04:58.747776   43680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:04:58.764054   43680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:04:58.786440   43680 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0319 20:04:58.787047   43680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:04:58.787109   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.798527   43680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:04:58.798580   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.809450   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.820430   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.831208   43680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:04:58.842670   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.853874   43680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.866627   43680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:04:58.878222   43680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:04:58.888969   43680 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0319 20:04:58.889025   43680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:04:58.899782   43680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:04:59.041999   43680 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:04:59.307769   43680 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:04:59.307846   43680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:04:59.314127   43680 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0319 20:04:59.314148   43680 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0319 20:04:59.314154   43680 command_runner.go:130] > Device: 0,22	Inode: 1314        Links: 1
	I0319 20:04:59.314161   43680 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0319 20:04:59.314166   43680 command_runner.go:130] > Access: 2024-03-19 20:04:59.175273304 +0000
	I0319 20:04:59.314178   43680 command_runner.go:130] > Modify: 2024-03-19 20:04:59.175273304 +0000
	I0319 20:04:59.314183   43680 command_runner.go:130] > Change: 2024-03-19 20:04:59.175273304 +0000
	I0319 20:04:59.314187   43680 command_runner.go:130] >  Birth: -
	I0319 20:04:59.314423   43680 start.go:562] Will wait 60s for crictl version
	I0319 20:04:59.314491   43680 ssh_runner.go:195] Run: which crictl
	I0319 20:04:59.318799   43680 command_runner.go:130] > /usr/bin/crictl
	I0319 20:04:59.319024   43680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:04:59.362144   43680 command_runner.go:130] > Version:  0.1.0
	I0319 20:04:59.362243   43680 command_runner.go:130] > RuntimeName:  cri-o
	I0319 20:04:59.362329   43680 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0319 20:04:59.362555   43680 command_runner.go:130] > RuntimeApiVersion:  v1
	I0319 20:04:59.363955   43680 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:04:59.364023   43680 ssh_runner.go:195] Run: crio --version
	I0319 20:04:59.395902   43680 command_runner.go:130] > crio version 1.29.1
	I0319 20:04:59.395940   43680 command_runner.go:130] > Version:        1.29.1
	I0319 20:04:59.395946   43680 command_runner.go:130] > GitCommit:      unknown
	I0319 20:04:59.395950   43680 command_runner.go:130] > GitCommitDate:  unknown
	I0319 20:04:59.395954   43680 command_runner.go:130] > GitTreeState:   clean
	I0319 20:04:59.395964   43680 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0319 20:04:59.395967   43680 command_runner.go:130] > GoVersion:      go1.21.6
	I0319 20:04:59.395971   43680 command_runner.go:130] > Compiler:       gc
	I0319 20:04:59.395976   43680 command_runner.go:130] > Platform:       linux/amd64
	I0319 20:04:59.395980   43680 command_runner.go:130] > Linkmode:       dynamic
	I0319 20:04:59.395984   43680 command_runner.go:130] > BuildTags:      
	I0319 20:04:59.395989   43680 command_runner.go:130] >   containers_image_ostree_stub
	I0319 20:04:59.395993   43680 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0319 20:04:59.395997   43680 command_runner.go:130] >   btrfs_noversion
	I0319 20:04:59.396004   43680 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0319 20:04:59.396008   43680 command_runner.go:130] >   libdm_no_deferred_remove
	I0319 20:04:59.396015   43680 command_runner.go:130] >   seccomp
	I0319 20:04:59.396019   43680 command_runner.go:130] > LDFlags:          unknown
	I0319 20:04:59.396022   43680 command_runner.go:130] > SeccompEnabled:   true
	I0319 20:04:59.396026   43680 command_runner.go:130] > AppArmorEnabled:  false
	I0319 20:04:59.397180   43680 ssh_runner.go:195] Run: crio --version
	I0319 20:04:59.431558   43680 command_runner.go:130] > crio version 1.29.1
	I0319 20:04:59.431583   43680 command_runner.go:130] > Version:        1.29.1
	I0319 20:04:59.431588   43680 command_runner.go:130] > GitCommit:      unknown
	I0319 20:04:59.431593   43680 command_runner.go:130] > GitCommitDate:  unknown
	I0319 20:04:59.431596   43680 command_runner.go:130] > GitTreeState:   clean
	I0319 20:04:59.431602   43680 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0319 20:04:59.431606   43680 command_runner.go:130] > GoVersion:      go1.21.6
	I0319 20:04:59.431609   43680 command_runner.go:130] > Compiler:       gc
	I0319 20:04:59.431614   43680 command_runner.go:130] > Platform:       linux/amd64
	I0319 20:04:59.431618   43680 command_runner.go:130] > Linkmode:       dynamic
	I0319 20:04:59.431622   43680 command_runner.go:130] > BuildTags:      
	I0319 20:04:59.431627   43680 command_runner.go:130] >   containers_image_ostree_stub
	I0319 20:04:59.431631   43680 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0319 20:04:59.431634   43680 command_runner.go:130] >   btrfs_noversion
	I0319 20:04:59.431638   43680 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0319 20:04:59.431642   43680 command_runner.go:130] >   libdm_no_deferred_remove
	I0319 20:04:59.431645   43680 command_runner.go:130] >   seccomp
	I0319 20:04:59.431667   43680 command_runner.go:130] > LDFlags:          unknown
	I0319 20:04:59.431672   43680 command_runner.go:130] > SeccompEnabled:   true
	I0319 20:04:59.431676   43680 command_runner.go:130] > AppArmorEnabled:  false
	I0319 20:04:59.435589   43680 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:04:59.436851   43680 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:04:59.439250   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:59.439572   43680 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:04:59.439599   43680 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:04:59.439750   43680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:04:59.444744   43680 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0319 20:04:59.444830   43680 kubeadm.go:877] updating cluster {Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:04:59.445010   43680 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:04:59.445066   43680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:04:59.498969   43680 command_runner.go:130] > {
	I0319 20:04:59.498995   43680 command_runner.go:130] >   "images": [
	I0319 20:04:59.499002   43680 command_runner.go:130] >     {
	I0319 20:04:59.499014   43680 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0319 20:04:59.499020   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499026   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0319 20:04:59.499030   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499034   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499043   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0319 20:04:59.499050   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0319 20:04:59.499056   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499060   43680 command_runner.go:130] >       "size": "65291810",
	I0319 20:04:59.499064   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499070   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499088   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499099   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499106   43680 command_runner.go:130] >     },
	I0319 20:04:59.499114   43680 command_runner.go:130] >     {
	I0319 20:04:59.499121   43680 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0319 20:04:59.499127   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499132   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0319 20:04:59.499138   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499142   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499149   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0319 20:04:59.499174   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0319 20:04:59.499183   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499191   43680 command_runner.go:130] >       "size": "1363676",
	I0319 20:04:59.499203   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499217   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499225   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499229   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499235   43680 command_runner.go:130] >     },
	I0319 20:04:59.499239   43680 command_runner.go:130] >     {
	I0319 20:04:59.499248   43680 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0319 20:04:59.499258   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499271   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0319 20:04:59.499279   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499289   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499304   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0319 20:04:59.499318   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0319 20:04:59.499324   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499330   43680 command_runner.go:130] >       "size": "31470524",
	I0319 20:04:59.499336   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499346   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499356   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499371   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499379   43680 command_runner.go:130] >     },
	I0319 20:04:59.499385   43680 command_runner.go:130] >     {
	I0319 20:04:59.499398   43680 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0319 20:04:59.499406   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499412   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0319 20:04:59.499420   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499427   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499444   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0319 20:04:59.499466   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0319 20:04:59.499475   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499485   43680 command_runner.go:130] >       "size": "61245718",
	I0319 20:04:59.499493   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.499498   43680 command_runner.go:130] >       "username": "nonroot",
	I0319 20:04:59.499505   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499523   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499532   43680 command_runner.go:130] >     },
	I0319 20:04:59.499538   43680 command_runner.go:130] >     {
	I0319 20:04:59.499551   43680 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0319 20:04:59.499561   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499571   43680 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0319 20:04:59.499579   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499583   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499593   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0319 20:04:59.499608   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0319 20:04:59.499616   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499627   43680 command_runner.go:130] >       "size": "150779692",
	I0319 20:04:59.499637   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.499646   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.499655   43680 command_runner.go:130] >       },
	I0319 20:04:59.499662   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499669   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499683   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499693   43680 command_runner.go:130] >     },
	I0319 20:04:59.499702   43680 command_runner.go:130] >     {
	I0319 20:04:59.499714   43680 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0319 20:04:59.499724   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499735   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0319 20:04:59.499744   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499752   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499763   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0319 20:04:59.499779   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0319 20:04:59.499789   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499798   43680 command_runner.go:130] >       "size": "128508878",
	I0319 20:04:59.499807   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.499817   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.499826   43680 command_runner.go:130] >       },
	I0319 20:04:59.499835   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.499840   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.499847   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.499852   43680 command_runner.go:130] >     },
	I0319 20:04:59.499866   43680 command_runner.go:130] >     {
	I0319 20:04:59.499880   43680 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0319 20:04:59.499889   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.499901   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0319 20:04:59.499910   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499917   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.499933   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0319 20:04:59.499949   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0319 20:04:59.499958   43680 command_runner.go:130] >       ],
	I0319 20:04:59.499968   43680 command_runner.go:130] >       "size": "123142962",
	I0319 20:04:59.499977   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.499987   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.499996   43680 command_runner.go:130] >       },
	I0319 20:04:59.500003   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500012   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500019   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.500023   43680 command_runner.go:130] >     },
	I0319 20:04:59.500031   43680 command_runner.go:130] >     {
	I0319 20:04:59.500043   43680 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0319 20:04:59.500052   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.500064   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0319 20:04:59.500072   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500082   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.500104   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0319 20:04:59.500119   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0319 20:04:59.500128   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500136   43680 command_runner.go:130] >       "size": "83634073",
	I0319 20:04:59.500145   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.500154   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500161   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500168   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.500173   43680 command_runner.go:130] >     },
	I0319 20:04:59.500178   43680 command_runner.go:130] >     {
	I0319 20:04:59.500187   43680 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0319 20:04:59.500191   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.500196   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0319 20:04:59.500207   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500218   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.500233   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0319 20:04:59.500247   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0319 20:04:59.500255   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500274   43680 command_runner.go:130] >       "size": "60724018",
	I0319 20:04:59.500283   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.500293   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.500301   43680 command_runner.go:130] >       },
	I0319 20:04:59.500307   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500316   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500326   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.500331   43680 command_runner.go:130] >     },
	I0319 20:04:59.500337   43680 command_runner.go:130] >     {
	I0319 20:04:59.500346   43680 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0319 20:04:59.500355   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.500371   43680 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0319 20:04:59.500380   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500386   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.500401   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0319 20:04:59.500414   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0319 20:04:59.500422   43680 command_runner.go:130] >       ],
	I0319 20:04:59.500431   43680 command_runner.go:130] >       "size": "750414",
	I0319 20:04:59.500441   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.500451   43680 command_runner.go:130] >         "value": "65535"
	I0319 20:04:59.500460   43680 command_runner.go:130] >       },
	I0319 20:04:59.500469   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.500476   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.500485   43680 command_runner.go:130] >       "pinned": true
	I0319 20:04:59.500493   43680 command_runner.go:130] >     }
	I0319 20:04:59.500499   43680 command_runner.go:130] >   ]
	I0319 20:04:59.500506   43680 command_runner.go:130] > }
	I0319 20:04:59.500728   43680 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:04:59.500743   43680 crio.go:433] Images already preloaded, skipping extraction
	I0319 20:04:59.500789   43680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:04:59.548727   43680 command_runner.go:130] > {
	I0319 20:04:59.548748   43680 command_runner.go:130] >   "images": [
	I0319 20:04:59.548754   43680 command_runner.go:130] >     {
	I0319 20:04:59.548769   43680 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0319 20:04:59.548775   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.548784   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0319 20:04:59.548791   43680 command_runner.go:130] >       ],
	I0319 20:04:59.548798   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.548816   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0319 20:04:59.548832   43680 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0319 20:04:59.548839   43680 command_runner.go:130] >       ],
	I0319 20:04:59.548848   43680 command_runner.go:130] >       "size": "65291810",
	I0319 20:04:59.548855   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.548863   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.548886   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.548896   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.548902   43680 command_runner.go:130] >     },
	I0319 20:04:59.548909   43680 command_runner.go:130] >     {
	I0319 20:04:59.548919   43680 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0319 20:04:59.548935   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.548947   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0319 20:04:59.548953   43680 command_runner.go:130] >       ],
	I0319 20:04:59.548961   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.548974   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0319 20:04:59.548989   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0319 20:04:59.548996   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549005   43680 command_runner.go:130] >       "size": "1363676",
	I0319 20:04:59.549011   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.549024   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549043   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549054   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549061   43680 command_runner.go:130] >     },
	I0319 20:04:59.549067   43680 command_runner.go:130] >     {
	I0319 20:04:59.549077   43680 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0319 20:04:59.549087   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549096   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0319 20:04:59.549105   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549113   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549129   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0319 20:04:59.549146   43680 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0319 20:04:59.549154   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549162   43680 command_runner.go:130] >       "size": "31470524",
	I0319 20:04:59.549172   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.549181   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549190   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549197   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549206   43680 command_runner.go:130] >     },
	I0319 20:04:59.549212   43680 command_runner.go:130] >     {
	I0319 20:04:59.549226   43680 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0319 20:04:59.549235   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549245   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0319 20:04:59.549253   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549261   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549276   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0319 20:04:59.549302   43680 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0319 20:04:59.549312   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549319   43680 command_runner.go:130] >       "size": "61245718",
	I0319 20:04:59.549326   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.549337   43680 command_runner.go:130] >       "username": "nonroot",
	I0319 20:04:59.549350   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549361   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549368   43680 command_runner.go:130] >     },
	I0319 20:04:59.549376   43680 command_runner.go:130] >     {
	I0319 20:04:59.549385   43680 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0319 20:04:59.549392   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549410   43680 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0319 20:04:59.549419   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549427   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549442   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0319 20:04:59.549458   43680 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0319 20:04:59.549467   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549475   43680 command_runner.go:130] >       "size": "150779692",
	I0319 20:04:59.549484   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.549490   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.549499   43680 command_runner.go:130] >       },
	I0319 20:04:59.549505   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549513   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549523   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549531   43680 command_runner.go:130] >     },
	I0319 20:04:59.549539   43680 command_runner.go:130] >     {
	I0319 20:04:59.549551   43680 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0319 20:04:59.549560   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549569   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0319 20:04:59.549577   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549585   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549601   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0319 20:04:59.549617   43680 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0319 20:04:59.549626   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549633   43680 command_runner.go:130] >       "size": "128508878",
	I0319 20:04:59.549641   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.549648   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.549657   43680 command_runner.go:130] >       },
	I0319 20:04:59.549664   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549675   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549685   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549691   43680 command_runner.go:130] >     },
	I0319 20:04:59.549699   43680 command_runner.go:130] >     {
	I0319 20:04:59.549710   43680 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0319 20:04:59.549723   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549735   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0319 20:04:59.549742   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549758   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549775   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0319 20:04:59.549790   43680 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0319 20:04:59.549802   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549812   43680 command_runner.go:130] >       "size": "123142962",
	I0319 20:04:59.549819   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.549829   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.549838   43680 command_runner.go:130] >       },
	I0319 20:04:59.549845   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.549855   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.549863   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.549870   43680 command_runner.go:130] >     },
	I0319 20:04:59.549883   43680 command_runner.go:130] >     {
	I0319 20:04:59.549896   43680 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0319 20:04:59.549903   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.549912   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0319 20:04:59.549922   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549929   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.549960   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0319 20:04:59.549976   43680 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0319 20:04:59.549985   43680 command_runner.go:130] >       ],
	I0319 20:04:59.549992   43680 command_runner.go:130] >       "size": "83634073",
	I0319 20:04:59.550005   43680 command_runner.go:130] >       "uid": null,
	I0319 20:04:59.550015   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.550022   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.550031   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.550038   43680 command_runner.go:130] >     },
	I0319 20:04:59.550047   43680 command_runner.go:130] >     {
	I0319 20:04:59.550058   43680 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0319 20:04:59.550068   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.550077   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0319 20:04:59.550085   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550092   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.550108   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0319 20:04:59.550124   43680 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0319 20:04:59.550132   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550146   43680 command_runner.go:130] >       "size": "60724018",
	I0319 20:04:59.550156   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.550163   43680 command_runner.go:130] >         "value": "0"
	I0319 20:04:59.550171   43680 command_runner.go:130] >       },
	I0319 20:04:59.550178   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.550187   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.550196   43680 command_runner.go:130] >       "pinned": false
	I0319 20:04:59.550203   43680 command_runner.go:130] >     },
	I0319 20:04:59.550211   43680 command_runner.go:130] >     {
	I0319 20:04:59.550223   43680 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0319 20:04:59.550232   43680 command_runner.go:130] >       "repoTags": [
	I0319 20:04:59.550240   43680 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0319 20:04:59.550248   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550255   43680 command_runner.go:130] >       "repoDigests": [
	I0319 20:04:59.550270   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0319 20:04:59.550288   43680 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0319 20:04:59.550297   43680 command_runner.go:130] >       ],
	I0319 20:04:59.550305   43680 command_runner.go:130] >       "size": "750414",
	I0319 20:04:59.550315   43680 command_runner.go:130] >       "uid": {
	I0319 20:04:59.550323   43680 command_runner.go:130] >         "value": "65535"
	I0319 20:04:59.550331   43680 command_runner.go:130] >       },
	I0319 20:04:59.550338   43680 command_runner.go:130] >       "username": "",
	I0319 20:04:59.550348   43680 command_runner.go:130] >       "spec": null,
	I0319 20:04:59.550357   43680 command_runner.go:130] >       "pinned": true
	I0319 20:04:59.550366   43680 command_runner.go:130] >     }
	I0319 20:04:59.550373   43680 command_runner.go:130] >   ]
	I0319 20:04:59.550378   43680 command_runner.go:130] > }
	I0319 20:04:59.550495   43680 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:04:59.550507   43680 cache_images.go:84] Images are preloaded, skipping loading
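Both "sudo crictl images --output json" runs above return the same image list, which is why crio.go and cache_images.go report the images as preloaded and skip extraction and loading. For reference, a minimal standalone sketch of the same presence check, assuming only the JSON shape shown in the log (an "images" array whose entries carry "repoTags") and the tag list minikube expects for v1.29.3; the file name and helper structure are hypothetical, not minikube code:

// preloadcheck.go - hypothetical sketch: verify that the required image tags
// appear in `crictl images --output json` output piped on stdin.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageList mirrors only the fields of the crictl JSON used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Usage: crictl images --output json | go run preloadcheck.go
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}

	present := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}

	// Tags taken from the image list shown in the log above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.29.3",
		"registry.k8s.io/kube-controller-manager:v1.29.3",
		"registry.k8s.io/kube-scheduler:v1.29.3",
		"registry.k8s.io/kube-proxy:v1.29.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}

	missing := 0
	for _, tag := range required {
		if !present[tag] {
			fmt.Println("missing:", tag)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("all images are preloaded")
	}
}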
	I0319 20:04:59.550516   43680 kubeadm.go:928] updating node { 192.168.39.64 8443 v1.29.3 crio true true} ...
	I0319 20:04:59.550648   43680 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-695944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
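In the unit fragment above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears any ExecStart inherited from the packaged kubelet.service so that the single ExecStart that follows becomes the only invocation. The hostname and node IP in that invocation come from the node entry logged at kubeadm.go:928. A purely illustrative Go sketch of how that flag line could be assembled from those values (the type and function names are hypothetical, not minikube code):

// kubeletflags.go - hypothetical sketch: render the kubelet ExecStart flags
// from the node values shown in the log (hostname, node IP, Kubernetes version).
package main

import "fmt"

type node struct {
	Hostname   string
	IP         string
	K8sVersion string
}

func execStart(n node) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml "+
			"--hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf "+
			"--node-ip=%s",
		n.K8sVersion, n.Hostname, n.IP)
}

func main() {
	// Values taken from the "updating node" line logged above.
	fmt.Println(execStart(node{Hostname: "multinode-695944", IP: "192.168.39.64", K8sVersion: "v1.29.3"}))
}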
	I0319 20:04:59.550722   43680 ssh_runner.go:195] Run: crio config
	I0319 20:04:59.585482   43680 command_runner.go:130] ! time="2024-03-19 20:04:59.571408956Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0319 20:04:59.592615   43680 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0319 20:04:59.602630   43680 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0319 20:04:59.602645   43680 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0319 20:04:59.602652   43680 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0319 20:04:59.602655   43680 command_runner.go:130] > #
	I0319 20:04:59.602662   43680 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0319 20:04:59.602668   43680 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0319 20:04:59.602673   43680 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0319 20:04:59.602682   43680 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0319 20:04:59.602685   43680 command_runner.go:130] > # reload'.
	I0319 20:04:59.602692   43680 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0319 20:04:59.602701   43680 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0319 20:04:59.602707   43680 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0319 20:04:59.602713   43680 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0319 20:04:59.602725   43680 command_runner.go:130] > [crio]
	I0319 20:04:59.602732   43680 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0319 20:04:59.602736   43680 command_runner.go:130] > # containers images, in this directory.
	I0319 20:04:59.602743   43680 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0319 20:04:59.602754   43680 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0319 20:04:59.602766   43680 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0319 20:04:59.602773   43680 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0319 20:04:59.602777   43680 command_runner.go:130] > # imagestore = ""
	I0319 20:04:59.602785   43680 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0319 20:04:59.602795   43680 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0319 20:04:59.602801   43680 command_runner.go:130] > storage_driver = "overlay"
	I0319 20:04:59.602810   43680 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0319 20:04:59.602819   43680 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0319 20:04:59.602839   43680 command_runner.go:130] > storage_option = [
	I0319 20:04:59.602844   43680 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0319 20:04:59.602847   43680 command_runner.go:130] > ]
	I0319 20:04:59.602854   43680 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0319 20:04:59.602863   43680 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0319 20:04:59.602876   43680 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0319 20:04:59.602888   43680 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0319 20:04:59.602899   43680 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0319 20:04:59.602909   43680 command_runner.go:130] > # always happen on a node reboot
	I0319 20:04:59.602917   43680 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0319 20:04:59.602941   43680 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0319 20:04:59.602952   43680 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0319 20:04:59.602960   43680 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0319 20:04:59.602965   43680 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0319 20:04:59.602974   43680 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0319 20:04:59.602985   43680 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0319 20:04:59.602989   43680 command_runner.go:130] > # internal_wipe = true
	I0319 20:04:59.603001   43680 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0319 20:04:59.603015   43680 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0319 20:04:59.603023   43680 command_runner.go:130] > # internal_repair = false
	I0319 20:04:59.603035   43680 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0319 20:04:59.603047   43680 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0319 20:04:59.603058   43680 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0319 20:04:59.603070   43680 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0319 20:04:59.603082   43680 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0319 20:04:59.603088   43680 command_runner.go:130] > [crio.api]
	I0319 20:04:59.603093   43680 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0319 20:04:59.603102   43680 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0319 20:04:59.603114   43680 command_runner.go:130] > # IP address on which the stream server will listen.
	I0319 20:04:59.603125   43680 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0319 20:04:59.603138   43680 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0319 20:04:59.603149   43680 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0319 20:04:59.603159   43680 command_runner.go:130] > # stream_port = "0"
	I0319 20:04:59.603167   43680 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0319 20:04:59.603175   43680 command_runner.go:130] > # stream_enable_tls = false
	I0319 20:04:59.603181   43680 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0319 20:04:59.603197   43680 command_runner.go:130] > # stream_idle_timeout = ""
	I0319 20:04:59.603221   43680 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0319 20:04:59.603235   43680 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0319 20:04:59.603244   43680 command_runner.go:130] > # minutes.
	I0319 20:04:59.603251   43680 command_runner.go:130] > # stream_tls_cert = ""
	I0319 20:04:59.603263   43680 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0319 20:04:59.603275   43680 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0319 20:04:59.603283   43680 command_runner.go:130] > # stream_tls_key = ""
	I0319 20:04:59.603289   43680 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0319 20:04:59.603302   43680 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0319 20:04:59.603333   43680 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0319 20:04:59.603343   43680 command_runner.go:130] > # stream_tls_ca = ""
	I0319 20:04:59.603355   43680 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0319 20:04:59.603364   43680 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0319 20:04:59.603377   43680 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0319 20:04:59.603384   43680 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0319 20:04:59.603391   43680 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0319 20:04:59.603402   43680 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0319 20:04:59.603411   43680 command_runner.go:130] > [crio.runtime]
	I0319 20:04:59.603426   43680 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0319 20:04:59.603438   43680 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0319 20:04:59.603447   43680 command_runner.go:130] > # "nofile=1024:2048"
	I0319 20:04:59.603458   43680 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0319 20:04:59.603467   43680 command_runner.go:130] > # default_ulimits = [
	I0319 20:04:59.603472   43680 command_runner.go:130] > # ]
	I0319 20:04:59.603483   43680 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0319 20:04:59.603487   43680 command_runner.go:130] > # no_pivot = false
	I0319 20:04:59.603502   43680 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0319 20:04:59.603516   43680 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0319 20:04:59.603524   43680 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0319 20:04:59.603537   43680 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0319 20:04:59.603548   43680 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0319 20:04:59.603562   43680 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0319 20:04:59.603572   43680 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0319 20:04:59.603582   43680 command_runner.go:130] > # Cgroup setting for conmon
	I0319 20:04:59.603588   43680 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0319 20:04:59.603603   43680 command_runner.go:130] > conmon_cgroup = "pod"
	I0319 20:04:59.603617   43680 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0319 20:04:59.603626   43680 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0319 20:04:59.603639   43680 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0319 20:04:59.603648   43680 command_runner.go:130] > conmon_env = [
	I0319 20:04:59.603658   43680 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0319 20:04:59.603666   43680 command_runner.go:130] > ]
	I0319 20:04:59.603675   43680 command_runner.go:130] > # Additional environment variables to set for all the
	I0319 20:04:59.603684   43680 command_runner.go:130] > # containers. These are overridden if set in the
	I0319 20:04:59.603690   43680 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0319 20:04:59.603698   43680 command_runner.go:130] > # default_env = [
	I0319 20:04:59.603703   43680 command_runner.go:130] > # ]
	I0319 20:04:59.603717   43680 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0319 20:04:59.603729   43680 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0319 20:04:59.603738   43680 command_runner.go:130] > # selinux = false
	I0319 20:04:59.603748   43680 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0319 20:04:59.603761   43680 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0319 20:04:59.603772   43680 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0319 20:04:59.603781   43680 command_runner.go:130] > # seccomp_profile = ""
	I0319 20:04:59.603787   43680 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0319 20:04:59.603798   43680 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0319 20:04:59.603811   43680 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0319 20:04:59.603820   43680 command_runner.go:130] > # which might increase security.
	I0319 20:04:59.603834   43680 command_runner.go:130] > # This option is currently deprecated,
	I0319 20:04:59.603845   43680 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0319 20:04:59.603856   43680 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0319 20:04:59.603871   43680 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0319 20:04:59.603882   43680 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0319 20:04:59.603891   43680 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0319 20:04:59.603904   43680 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0319 20:04:59.603916   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.603924   43680 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0319 20:04:59.603937   43680 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0319 20:04:59.603947   43680 command_runner.go:130] > # the cgroup blockio controller.
	I0319 20:04:59.603955   43680 command_runner.go:130] > # blockio_config_file = ""
	I0319 20:04:59.603968   43680 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0319 20:04:59.603983   43680 command_runner.go:130] > # blockio parameters.
	I0319 20:04:59.603991   43680 command_runner.go:130] > # blockio_reload = false
	I0319 20:04:59.604001   43680 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0319 20:04:59.604011   43680 command_runner.go:130] > # irqbalance daemon.
	I0319 20:04:59.604020   43680 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0319 20:04:59.604034   43680 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0319 20:04:59.604048   43680 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0319 20:04:59.604061   43680 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0319 20:04:59.604073   43680 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0319 20:04:59.604083   43680 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0319 20:04:59.604088   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.604101   43680 command_runner.go:130] > # rdt_config_file = ""
	I0319 20:04:59.604113   43680 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0319 20:04:59.604121   43680 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0319 20:04:59.604160   43680 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0319 20:04:59.604171   43680 command_runner.go:130] > # separate_pull_cgroup = ""
	I0319 20:04:59.604181   43680 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0319 20:04:59.604191   43680 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0319 20:04:59.604198   43680 command_runner.go:130] > # will be added.
	I0319 20:04:59.604205   43680 command_runner.go:130] > # default_capabilities = [
	I0319 20:04:59.604214   43680 command_runner.go:130] > # 	"CHOWN",
	I0319 20:04:59.604220   43680 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0319 20:04:59.604226   43680 command_runner.go:130] > # 	"FSETID",
	I0319 20:04:59.604235   43680 command_runner.go:130] > # 	"FOWNER",
	I0319 20:04:59.604241   43680 command_runner.go:130] > # 	"SETGID",
	I0319 20:04:59.604250   43680 command_runner.go:130] > # 	"SETUID",
	I0319 20:04:59.604267   43680 command_runner.go:130] > # 	"SETPCAP",
	I0319 20:04:59.604275   43680 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0319 20:04:59.604281   43680 command_runner.go:130] > # 	"KILL",
	I0319 20:04:59.604287   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604299   43680 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0319 20:04:59.604313   43680 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0319 20:04:59.604326   43680 command_runner.go:130] > # add_inheritable_capabilities = false
	I0319 20:04:59.604337   43680 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0319 20:04:59.604348   43680 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0319 20:04:59.604357   43680 command_runner.go:130] > default_sysctls = [
	I0319 20:04:59.604378   43680 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0319 20:04:59.604387   43680 command_runner.go:130] > ]
	I0319 20:04:59.604395   43680 command_runner.go:130] > # List of devices on the host that a
	I0319 20:04:59.604408   43680 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0319 20:04:59.604417   43680 command_runner.go:130] > # allowed_devices = [
	I0319 20:04:59.604423   43680 command_runner.go:130] > # 	"/dev/fuse",
	I0319 20:04:59.604431   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604437   43680 command_runner.go:130] > # List of additional devices. specified as
	I0319 20:04:59.604450   43680 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0319 20:04:59.604462   43680 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0319 20:04:59.604473   43680 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0319 20:04:59.604482   43680 command_runner.go:130] > # additional_devices = [
	I0319 20:04:59.604487   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604497   43680 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0319 20:04:59.604506   43680 command_runner.go:130] > # cdi_spec_dirs = [
	I0319 20:04:59.604516   43680 command_runner.go:130] > # 	"/etc/cdi",
	I0319 20:04:59.604521   43680 command_runner.go:130] > # 	"/var/run/cdi",
	I0319 20:04:59.604526   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604534   43680 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0319 20:04:59.604548   43680 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0319 20:04:59.604558   43680 command_runner.go:130] > # Defaults to false.
	I0319 20:04:59.604566   43680 command_runner.go:130] > # device_ownership_from_security_context = false
	I0319 20:04:59.604579   43680 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0319 20:04:59.604591   43680 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0319 20:04:59.604601   43680 command_runner.go:130] > # hooks_dir = [
	I0319 20:04:59.604609   43680 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0319 20:04:59.604613   43680 command_runner.go:130] > # ]
	I0319 20:04:59.604621   43680 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0319 20:04:59.604634   43680 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0319 20:04:59.604646   43680 command_runner.go:130] > # its default mounts from the following two files:
	I0319 20:04:59.604654   43680 command_runner.go:130] > #
	I0319 20:04:59.604664   43680 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0319 20:04:59.604677   43680 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0319 20:04:59.604688   43680 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0319 20:04:59.604695   43680 command_runner.go:130] > #
	I0319 20:04:59.604701   43680 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0319 20:04:59.604720   43680 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0319 20:04:59.604734   43680 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0319 20:04:59.604745   43680 command_runner.go:130] > #      only add mounts it finds in this file.
	I0319 20:04:59.604749   43680 command_runner.go:130] > #
	I0319 20:04:59.604755   43680 command_runner.go:130] > # default_mounts_file = ""
	I0319 20:04:59.604764   43680 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0319 20:04:59.604775   43680 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0319 20:04:59.604785   43680 command_runner.go:130] > pids_limit = 1024
	I0319 20:04:59.604795   43680 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0319 20:04:59.604808   43680 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0319 20:04:59.604821   43680 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0319 20:04:59.604834   43680 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0319 20:04:59.604840   43680 command_runner.go:130] > # log_size_max = -1
	I0319 20:04:59.604851   43680 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0319 20:04:59.604861   43680 command_runner.go:130] > # log_to_journald = false
	I0319 20:04:59.604877   43680 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0319 20:04:59.604888   43680 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0319 20:04:59.604900   43680 command_runner.go:130] > # Path to directory for container attach sockets.
	I0319 20:04:59.604910   43680 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0319 20:04:59.604922   43680 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0319 20:04:59.604930   43680 command_runner.go:130] > # bind_mount_prefix = ""
	I0319 20:04:59.604936   43680 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0319 20:04:59.604945   43680 command_runner.go:130] > # read_only = false
	I0319 20:04:59.604958   43680 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0319 20:04:59.604968   43680 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0319 20:04:59.604978   43680 command_runner.go:130] > # live configuration reload.
	I0319 20:04:59.604984   43680 command_runner.go:130] > # log_level = "info"
	I0319 20:04:59.604994   43680 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0319 20:04:59.605005   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.605014   43680 command_runner.go:130] > # log_filter = ""
	I0319 20:04:59.605023   43680 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0319 20:04:59.605042   43680 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0319 20:04:59.605051   43680 command_runner.go:130] > # separated by comma.
	I0319 20:04:59.605064   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605074   43680 command_runner.go:130] > # uid_mappings = ""
	I0319 20:04:59.605084   43680 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0319 20:04:59.605101   43680 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0319 20:04:59.605111   43680 command_runner.go:130] > # separated by comma.
	I0319 20:04:59.605122   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605132   43680 command_runner.go:130] > # gid_mappings = ""
	I0319 20:04:59.605142   43680 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0319 20:04:59.605156   43680 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0319 20:04:59.605166   43680 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0319 20:04:59.605181   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605190   43680 command_runner.go:130] > # minimum_mappable_uid = -1
	I0319 20:04:59.605200   43680 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0319 20:04:59.605216   43680 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0319 20:04:59.605226   43680 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0319 20:04:59.605236   43680 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0319 20:04:59.605246   43680 command_runner.go:130] > # minimum_mappable_gid = -1
	I0319 20:04:59.605261   43680 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0319 20:04:59.605273   43680 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0319 20:04:59.605285   43680 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0319 20:04:59.605295   43680 command_runner.go:130] > # ctr_stop_timeout = 30
	I0319 20:04:59.605306   43680 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0319 20:04:59.605315   43680 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0319 20:04:59.605321   43680 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0319 20:04:59.605333   43680 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0319 20:04:59.605341   43680 command_runner.go:130] > drop_infra_ctr = false
	I0319 20:04:59.605354   43680 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0319 20:04:59.605366   43680 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0319 20:04:59.605380   43680 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0319 20:04:59.605390   43680 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0319 20:04:59.605397   43680 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0319 20:04:59.605410   43680 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0319 20:04:59.605423   43680 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0319 20:04:59.605431   43680 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0319 20:04:59.605441   43680 command_runner.go:130] > # shared_cpuset = ""
	I0319 20:04:59.605450   43680 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0319 20:04:59.605461   43680 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0319 20:04:59.605471   43680 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0319 20:04:59.605485   43680 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0319 20:04:59.605497   43680 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0319 20:04:59.605507   43680 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0319 20:04:59.605533   43680 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0319 20:04:59.605544   43680 command_runner.go:130] > # enable_criu_support = false
	I0319 20:04:59.605555   43680 command_runner.go:130] > # Enable/disable the generation of the container,
	I0319 20:04:59.605567   43680 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0319 20:04:59.605575   43680 command_runner.go:130] > # enable_pod_events = false
	I0319 20:04:59.605586   43680 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0319 20:04:59.605592   43680 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0319 20:04:59.605603   43680 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0319 20:04:59.605614   43680 command_runner.go:130] > # default_runtime = "runc"
	I0319 20:04:59.605623   43680 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0319 20:04:59.605638   43680 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0319 20:04:59.605659   43680 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0319 20:04:59.605671   43680 command_runner.go:130] > # creation as a file is not desired either.
	I0319 20:04:59.605684   43680 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0319 20:04:59.605692   43680 command_runner.go:130] > # the hostname is being managed dynamically.
	I0319 20:04:59.605699   43680 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0319 20:04:59.605708   43680 command_runner.go:130] > # ]
	I0319 20:04:59.605718   43680 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0319 20:04:59.605731   43680 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0319 20:04:59.605741   43680 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0319 20:04:59.605753   43680 command_runner.go:130] > # Each entry in the table should follow the format:
	I0319 20:04:59.605758   43680 command_runner.go:130] > #
	I0319 20:04:59.605769   43680 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0319 20:04:59.605776   43680 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0319 20:04:59.605838   43680 command_runner.go:130] > # runtime_type = "oci"
	I0319 20:04:59.605851   43680 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0319 20:04:59.605859   43680 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0319 20:04:59.605865   43680 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0319 20:04:59.605893   43680 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0319 20:04:59.605903   43680 command_runner.go:130] > # monitor_env = []
	I0319 20:04:59.605911   43680 command_runner.go:130] > # privileged_without_host_devices = false
	I0319 20:04:59.605921   43680 command_runner.go:130] > # allowed_annotations = []
	I0319 20:04:59.605930   43680 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0319 20:04:59.605937   43680 command_runner.go:130] > # Where:
	I0319 20:04:59.605948   43680 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0319 20:04:59.605970   43680 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0319 20:04:59.605984   43680 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0319 20:04:59.605997   43680 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0319 20:04:59.606009   43680 command_runner.go:130] > #   in $PATH.
	I0319 20:04:59.606022   43680 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0319 20:04:59.606032   43680 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0319 20:04:59.606044   43680 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0319 20:04:59.606051   43680 command_runner.go:130] > #   state.
	I0319 20:04:59.606059   43680 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0319 20:04:59.606071   43680 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0319 20:04:59.606084   43680 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0319 20:04:59.606094   43680 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0319 20:04:59.606106   43680 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0319 20:04:59.606118   43680 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0319 20:04:59.606128   43680 command_runner.go:130] > #   The currently recognized values are:
	I0319 20:04:59.606138   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0319 20:04:59.606151   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0319 20:04:59.606160   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0319 20:04:59.606172   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0319 20:04:59.606188   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0319 20:04:59.606204   43680 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0319 20:04:59.606217   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0319 20:04:59.606231   43680 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0319 20:04:59.606244   43680 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0319 20:04:59.606254   43680 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0319 20:04:59.606258   43680 command_runner.go:130] > #   deprecated option "conmon".
	I0319 20:04:59.606268   43680 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0319 20:04:59.606280   43680 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0319 20:04:59.606292   43680 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0319 20:04:59.606303   43680 command_runner.go:130] > #   should be moved to the container's cgroup
	I0319 20:04:59.606317   43680 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0319 20:04:59.606327   43680 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0319 20:04:59.606341   43680 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0319 20:04:59.606350   43680 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0319 20:04:59.606354   43680 command_runner.go:130] > #
	I0319 20:04:59.606366   43680 command_runner.go:130] > # Using the seccomp notifier feature:
	I0319 20:04:59.606376   43680 command_runner.go:130] > #
	I0319 20:04:59.606390   43680 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0319 20:04:59.606403   43680 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0319 20:04:59.606411   43680 command_runner.go:130] > #
	I0319 20:04:59.606420   43680 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0319 20:04:59.606432   43680 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0319 20:04:59.606439   43680 command_runner.go:130] > #
	I0319 20:04:59.606445   43680 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0319 20:04:59.606468   43680 command_runner.go:130] > # feature.
	I0319 20:04:59.606473   43680 command_runner.go:130] > #
	I0319 20:04:59.606487   43680 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0319 20:04:59.606500   43680 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0319 20:04:59.606512   43680 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0319 20:04:59.606524   43680 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0319 20:04:59.606534   43680 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0319 20:04:59.606540   43680 command_runner.go:130] > #
	I0319 20:04:59.606547   43680 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0319 20:04:59.606560   43680 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0319 20:04:59.606570   43680 command_runner.go:130] > #
	I0319 20:04:59.606580   43680 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0319 20:04:59.606591   43680 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0319 20:04:59.606599   43680 command_runner.go:130] > #
	I0319 20:04:59.606608   43680 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0319 20:04:59.606620   43680 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0319 20:04:59.606627   43680 command_runner.go:130] > # limitation.
	I0319 20:04:59.606633   43680 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0319 20:04:59.606643   43680 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0319 20:04:59.606650   43680 command_runner.go:130] > runtime_type = "oci"
	I0319 20:04:59.606660   43680 command_runner.go:130] > runtime_root = "/run/runc"
	I0319 20:04:59.606669   43680 command_runner.go:130] > runtime_config_path = ""
	I0319 20:04:59.606680   43680 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0319 20:04:59.606689   43680 command_runner.go:130] > monitor_cgroup = "pod"
	I0319 20:04:59.606696   43680 command_runner.go:130] > monitor_exec_cgroup = ""
	I0319 20:04:59.606705   43680 command_runner.go:130] > monitor_env = [
	I0319 20:04:59.606712   43680 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0319 20:04:59.606728   43680 command_runner.go:130] > ]
	I0319 20:04:59.606740   43680 command_runner.go:130] > privileged_without_host_devices = false
	I0319 20:04:59.606751   43680 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0319 20:04:59.606763   43680 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0319 20:04:59.606775   43680 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0319 20:04:59.606791   43680 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0319 20:04:59.606808   43680 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0319 20:04:59.606816   43680 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0319 20:04:59.606830   43680 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0319 20:04:59.606846   43680 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0319 20:04:59.606859   43680 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0319 20:04:59.606878   43680 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0319 20:04:59.606887   43680 command_runner.go:130] > # Example:
	I0319 20:04:59.606895   43680 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0319 20:04:59.606906   43680 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0319 20:04:59.606914   43680 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0319 20:04:59.606920   43680 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0319 20:04:59.606929   43680 command_runner.go:130] > # cpuset = 0
	I0319 20:04:59.606936   43680 command_runner.go:130] > # cpushares = "0-1"
	I0319 20:04:59.606945   43680 command_runner.go:130] > # Where:
	I0319 20:04:59.606953   43680 command_runner.go:130] > # The workload name is workload-type.
	I0319 20:04:59.606967   43680 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0319 20:04:59.606978   43680 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0319 20:04:59.606990   43680 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0319 20:04:59.607001   43680 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0319 20:04:59.607013   43680 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0319 20:04:59.607025   43680 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0319 20:04:59.607039   43680 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0319 20:04:59.607050   43680 command_runner.go:130] > # Default value is set to true
	I0319 20:04:59.607057   43680 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0319 20:04:59.607068   43680 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0319 20:04:59.607079   43680 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0319 20:04:59.607087   43680 command_runner.go:130] > # Default value is set to 'false'
	I0319 20:04:59.607095   43680 command_runner.go:130] > # disable_hostport_mapping = false
	I0319 20:04:59.607107   43680 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0319 20:04:59.607116   43680 command_runner.go:130] > #
	I0319 20:04:59.607131   43680 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0319 20:04:59.607142   43680 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0319 20:04:59.607152   43680 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0319 20:04:59.607162   43680 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0319 20:04:59.607176   43680 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0319 20:04:59.607184   43680 command_runner.go:130] > [crio.image]
	I0319 20:04:59.607191   43680 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0319 20:04:59.607201   43680 command_runner.go:130] > # default_transport = "docker://"
	I0319 20:04:59.607211   43680 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0319 20:04:59.607225   43680 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0319 20:04:59.607232   43680 command_runner.go:130] > # global_auth_file = ""
	I0319 20:04:59.607243   43680 command_runner.go:130] > # The image used to instantiate infra containers.
	I0319 20:04:59.607252   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.607263   43680 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0319 20:04:59.607275   43680 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0319 20:04:59.607286   43680 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0319 20:04:59.607294   43680 command_runner.go:130] > # This option supports live configuration reload.
	I0319 20:04:59.607298   43680 command_runner.go:130] > # pause_image_auth_file = ""
	I0319 20:04:59.607310   43680 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0319 20:04:59.607324   43680 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0319 20:04:59.607337   43680 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0319 20:04:59.607350   43680 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0319 20:04:59.607360   43680 command_runner.go:130] > # pause_command = "/pause"
	I0319 20:04:59.607371   43680 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0319 20:04:59.607384   43680 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0319 20:04:59.607393   43680 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0319 20:04:59.607405   43680 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0319 20:04:59.607418   43680 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0319 20:04:59.607431   43680 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0319 20:04:59.607441   43680 command_runner.go:130] > # pinned_images = [
	I0319 20:04:59.607447   43680 command_runner.go:130] > # ]
	I0319 20:04:59.607459   43680 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0319 20:04:59.607471   43680 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0319 20:04:59.607483   43680 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0319 20:04:59.607491   43680 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0319 20:04:59.607500   43680 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0319 20:04:59.607519   43680 command_runner.go:130] > # signature_policy = ""
	I0319 20:04:59.607532   43680 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0319 20:04:59.607546   43680 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0319 20:04:59.607559   43680 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0319 20:04:59.607575   43680 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0319 20:04:59.607585   43680 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0319 20:04:59.607592   43680 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0319 20:04:59.607603   43680 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0319 20:04:59.607617   43680 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0319 20:04:59.607628   43680 command_runner.go:130] > # changing them here.
	I0319 20:04:59.607637   43680 command_runner.go:130] > # insecure_registries = [
	I0319 20:04:59.607646   43680 command_runner.go:130] > # ]
	I0319 20:04:59.607658   43680 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0319 20:04:59.607669   43680 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0319 20:04:59.607679   43680 command_runner.go:130] > # image_volumes = "mkdir"
	I0319 20:04:59.607687   43680 command_runner.go:130] > # Temporary directory to use for storing big files
	I0319 20:04:59.607693   43680 command_runner.go:130] > # big_files_temporary_dir = ""
	I0319 20:04:59.607707   43680 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0319 20:04:59.607717   43680 command_runner.go:130] > # CNI plugins.
	I0319 20:04:59.607723   43680 command_runner.go:130] > [crio.network]
	I0319 20:04:59.607736   43680 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0319 20:04:59.607746   43680 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0319 20:04:59.607756   43680 command_runner.go:130] > # cni_default_network = ""
	I0319 20:04:59.607768   43680 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0319 20:04:59.607778   43680 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0319 20:04:59.607786   43680 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0319 20:04:59.607793   43680 command_runner.go:130] > # plugin_dirs = [
	I0319 20:04:59.607799   43680 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0319 20:04:59.607808   43680 command_runner.go:130] > # ]
	I0319 20:04:59.607818   43680 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0319 20:04:59.607828   43680 command_runner.go:130] > [crio.metrics]
	I0319 20:04:59.607842   43680 command_runner.go:130] > # Globally enable or disable metrics support.
	I0319 20:04:59.607852   43680 command_runner.go:130] > enable_metrics = true
	I0319 20:04:59.607862   43680 command_runner.go:130] > # Specify enabled metrics collectors.
	I0319 20:04:59.607874   43680 command_runner.go:130] > # Per default all metrics are enabled.
	I0319 20:04:59.607886   43680 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0319 20:04:59.607905   43680 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0319 20:04:59.607919   43680 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0319 20:04:59.607928   43680 command_runner.go:130] > # metrics_collectors = [
	I0319 20:04:59.607936   43680 command_runner.go:130] > # 	"operations",
	I0319 20:04:59.607948   43680 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0319 20:04:59.607957   43680 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0319 20:04:59.607964   43680 command_runner.go:130] > # 	"operations_errors",
	I0319 20:04:59.607970   43680 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0319 20:04:59.607981   43680 command_runner.go:130] > # 	"image_pulls_by_name",
	I0319 20:04:59.607992   43680 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0319 20:04:59.608004   43680 command_runner.go:130] > # 	"image_pulls_failures",
	I0319 20:04:59.608014   43680 command_runner.go:130] > # 	"image_pulls_successes",
	I0319 20:04:59.608024   43680 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0319 20:04:59.608033   43680 command_runner.go:130] > # 	"image_layer_reuse",
	I0319 20:04:59.608043   43680 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0319 20:04:59.608052   43680 command_runner.go:130] > # 	"containers_oom_total",
	I0319 20:04:59.608060   43680 command_runner.go:130] > # 	"containers_oom",
	I0319 20:04:59.608064   43680 command_runner.go:130] > # 	"processes_defunct",
	I0319 20:04:59.608070   43680 command_runner.go:130] > # 	"operations_total",
	I0319 20:04:59.608080   43680 command_runner.go:130] > # 	"operations_latency_seconds",
	I0319 20:04:59.608092   43680 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0319 20:04:59.608101   43680 command_runner.go:130] > # 	"operations_errors_total",
	I0319 20:04:59.608111   43680 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0319 20:04:59.608122   43680 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0319 20:04:59.608132   43680 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0319 20:04:59.608142   43680 command_runner.go:130] > # 	"image_pulls_success_total",
	I0319 20:04:59.608151   43680 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0319 20:04:59.608159   43680 command_runner.go:130] > # 	"containers_oom_count_total",
	I0319 20:04:59.608164   43680 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0319 20:04:59.608174   43680 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0319 20:04:59.608183   43680 command_runner.go:130] > # ]
	I0319 20:04:59.608193   43680 command_runner.go:130] > # The port on which the metrics server will listen.
	I0319 20:04:59.608203   43680 command_runner.go:130] > # metrics_port = 9090
	I0319 20:04:59.608214   43680 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0319 20:04:59.608224   43680 command_runner.go:130] > # metrics_socket = ""
	I0319 20:04:59.608235   43680 command_runner.go:130] > # The certificate for the secure metrics server.
	I0319 20:04:59.608251   43680 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0319 20:04:59.608287   43680 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0319 20:04:59.608300   43680 command_runner.go:130] > # certificate on any modification event.
	I0319 20:04:59.608309   43680 command_runner.go:130] > # metrics_cert = ""
	I0319 20:04:59.608320   43680 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0319 20:04:59.608331   43680 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0319 20:04:59.608340   43680 command_runner.go:130] > # metrics_key = ""
	I0319 20:04:59.608350   43680 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0319 20:04:59.608357   43680 command_runner.go:130] > [crio.tracing]
	I0319 20:04:59.608366   43680 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0319 20:04:59.608376   43680 command_runner.go:130] > # enable_tracing = false
	I0319 20:04:59.608389   43680 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0319 20:04:59.608399   43680 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0319 20:04:59.608414   43680 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0319 20:04:59.608424   43680 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0319 20:04:59.608434   43680 command_runner.go:130] > # CRI-O NRI configuration.
	I0319 20:04:59.608443   43680 command_runner.go:130] > [crio.nri]
	I0319 20:04:59.608453   43680 command_runner.go:130] > # Globally enable or disable NRI.
	I0319 20:04:59.608460   43680 command_runner.go:130] > # enable_nri = false
	I0319 20:04:59.608468   43680 command_runner.go:130] > # NRI socket to listen on.
	I0319 20:04:59.608478   43680 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0319 20:04:59.608489   43680 command_runner.go:130] > # NRI plugin directory to use.
	I0319 20:04:59.608497   43680 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0319 20:04:59.608509   43680 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0319 20:04:59.608520   43680 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0319 20:04:59.608535   43680 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0319 20:04:59.608545   43680 command_runner.go:130] > # nri_disable_connections = false
	I0319 20:04:59.608556   43680 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0319 20:04:59.608563   43680 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0319 20:04:59.608571   43680 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0319 20:04:59.608582   43680 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0319 20:04:59.608596   43680 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0319 20:04:59.608604   43680 command_runner.go:130] > [crio.stats]
	I0319 20:04:59.608616   43680 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0319 20:04:59.608628   43680 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0319 20:04:59.608639   43680 command_runner.go:130] > # stats_collection_period = 0
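The [crio.metrics] section in the config dumped above sets enable_metrics = true and lists the collector names, noting that they may be exposed with the "crio_" or "container_runtime_crio_" prefixes. A minimal Go sketch of scraping those operation counters follows; it assumes the default metrics_port of 9090 and the usual Prometheus /metrics path (neither is confirmed by this log), and it would have to run on the node itself (e.g. via `minikube ssh`).

package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Default CRI-O metrics port; metrics_port is commented out in the config
	// above, so 9090 is an assumption, not something the log confirms.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Keep only the operation counters; the prefixes come from the comment in
	// the [crio.metrics] section ("crio_" / "container_runtime_crio_").
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_operations") ||
			strings.HasPrefix(line, "container_runtime_crio_operations") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}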
	I0319 20:04:59.608849   43680 cni.go:84] Creating CNI manager for ""
	I0319 20:04:59.608871   43680 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0319 20:04:59.608881   43680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:04:59.608910   43680 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-695944 NodeName:multinode-695944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:04:59.609058   43680 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-695944"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
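The kubeadm options logged above fix PodSubnet 10.244.0.0/16, ServiceCIDR 10.96.0.0/12 and AdvertiseAddress 192.168.39.64. As a quick standalone check (illustrative only, not part of minikube), the Go standard library's net/netip can confirm the two CIDRs are disjoint and that the advertise address sits outside both:

package main

import (
	"fmt"
	"log"
	"net/netip"
)

func main() {
	// Values copied from the kubeadm options / generated config above.
	podCIDR := netip.MustParsePrefix("10.244.0.0/16") // podSubnet
	svcCIDR := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet
	apiAddr := netip.MustParseAddr("192.168.39.64")   // advertiseAddress

	if podCIDR.Overlaps(svcCIDR) {
		log.Fatal("pod and service CIDRs overlap")
	}
	if podCIDR.Contains(apiAddr) || svcCIDR.Contains(apiAddr) {
		log.Fatal("advertise address falls inside a cluster CIDR")
	}
	fmt.Println("pod CIDR, service CIDR and advertise address are disjoint")
}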
	
	I0319 20:04:59.609129   43680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:04:59.620530   43680 command_runner.go:130] > kubeadm
	I0319 20:04:59.620544   43680 command_runner.go:130] > kubectl
	I0319 20:04:59.620548   43680 command_runner.go:130] > kubelet
	I0319 20:04:59.620667   43680 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:04:59.620730   43680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:04:59.631327   43680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0319 20:04:59.653641   43680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:04:59.674595   43680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0319 20:04:59.696037   43680 ssh_runner.go:195] Run: grep 192.168.39.64	control-plane.minikube.internal$ /etc/hosts
	I0319 20:04:59.700724   43680 command_runner.go:130] > 192.168.39.64	control-plane.minikube.internal
	I0319 20:04:59.700984   43680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:04:59.854046   43680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:04:59.870237   43680 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944 for IP: 192.168.39.64
	I0319 20:04:59.870257   43680 certs.go:194] generating shared ca certs ...
	I0319 20:04:59.870273   43680 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:04:59.870459   43680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:04:59.870514   43680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:04:59.870526   43680 certs.go:256] generating profile certs ...
	I0319 20:04:59.870611   43680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/client.key
	I0319 20:04:59.870678   43680 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.key.e90732cd
	I0319 20:04:59.870712   43680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.key
	I0319 20:04:59.870723   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0319 20:04:59.870739   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0319 20:04:59.870751   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0319 20:04:59.870766   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0319 20:04:59.870778   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0319 20:04:59.870791   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0319 20:04:59.870808   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0319 20:04:59.870820   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0319 20:04:59.870870   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:04:59.870901   43680 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:04:59.870910   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:04:59.870933   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:04:59.870955   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:04:59.870978   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:04:59.871014   43680 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:04:59.871037   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem -> /usr/share/ca-certificates/17301.pem
	I0319 20:04:59.871060   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> /usr/share/ca-certificates/173012.pem
	I0319 20:04:59.871072   43680 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:04:59.871696   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:04:59.899925   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:04:59.927035   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:04:59.953712   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:04:59.981130   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:05:00.007909   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:05:00.034255   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:05:00.061925   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/multinode-695944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 20:05:00.090580   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:05:00.117514   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:05:00.144156   43680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:05:00.170936   43680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:05:00.189010   43680 ssh_runner.go:195] Run: openssl version
	I0319 20:05:00.195252   43680 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0319 20:05:00.195592   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:05:00.208153   43680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.213186   43680 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.213204   43680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.213235   43680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:05:00.219283   43680 command_runner.go:130] > 3ec20f2e
	I0319 20:05:00.219340   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:05:00.228912   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:05:00.240218   43680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.245033   43680 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.245153   43680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.245193   43680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:05:00.251394   43680 command_runner.go:130] > b5213941
	I0319 20:05:00.251444   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:05:00.261510   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:05:00.273402   43680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.278573   43680 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.278769   43680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.278825   43680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:05:00.285037   43680 command_runner.go:130] > 51391683
	I0319 20:05:00.285153   43680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:05:00.295782   43680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:05:00.301140   43680 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:05:00.301165   43680 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0319 20:05:00.301174   43680 command_runner.go:130] > Device: 253,1	Inode: 6292486     Links: 1
	I0319 20:05:00.301185   43680 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0319 20:05:00.301197   43680 command_runner.go:130] > Access: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301205   43680 command_runner.go:130] > Modify: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301216   43680 command_runner.go:130] > Change: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301226   43680 command_runner.go:130] >  Birth: 2024-03-19 19:58:39.739785765 +0000
	I0319 20:05:00.301279   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:05:00.307708   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.307755   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:05:00.313926   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.314143   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:05:00.319987   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.320274   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:05:00.326122   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.326308   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:05:00.332342   43680 command_runner.go:130] > Certificate will not expire
	I0319 20:05:00.332404   43680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:05:00.338461   43680 command_runner.go:130] > Certificate will not expire
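Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate expires within the next 24 hours. A minimal Go sketch of the same check for one of those certificates, using only the standard library (the path is copied from the log; this is illustrative, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// -checkend 86400: report failure if the cert expires within 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}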
	I0319 20:05:00.338521   43680 kubeadm.go:391] StartCluster: {Name:multinode-695944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-695944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.105 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:05:00.338694   43680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:05:00.338740   43680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:05:00.380909   43680 command_runner.go:130] > e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc
	I0319 20:05:00.380941   43680 command_runner.go:130] > a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f
	I0319 20:05:00.380951   43680 command_runner.go:130] > b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751
	I0319 20:05:00.380960   43680 command_runner.go:130] > baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265
	I0319 20:05:00.380970   43680 command_runner.go:130] > 06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f
	I0319 20:05:00.380978   43680 command_runner.go:130] > 8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd
	I0319 20:05:00.380990   43680 command_runner.go:130] > ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4
	I0319 20:05:00.381003   43680 command_runner.go:130] > 7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a
	I0319 20:05:00.381034   43680 cri.go:89] found id: "e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc"
	I0319 20:05:00.381045   43680 cri.go:89] found id: "a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f"
	I0319 20:05:00.381051   43680 cri.go:89] found id: "b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751"
	I0319 20:05:00.381056   43680 cri.go:89] found id: "baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265"
	I0319 20:05:00.381060   43680 cri.go:89] found id: "06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f"
	I0319 20:05:00.381067   43680 cri.go:89] found id: "8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd"
	I0319 20:05:00.381073   43680 cri.go:89] found id: "ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4"
	I0319 20:05:00.381076   43680 cri.go:89] found id: "7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a"
	I0319 20:05:00.381081   43680 cri.go:89] found id: ""
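The listing above comes from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, whose output is one container ID per line. A small Go sketch of running the same command and collecting the IDs; it assumes crictl and sudo are available on the node (e.g. via `minikube ssh`) and is not how cri.go itself is implemented:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log; requires crictl and a reachable CRI socket.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl failed: %v", err)
	}

	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}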
	I0319 20:05:00.381149   43680 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.353270176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878935353242640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d59443cd-411c-4993-8e69-7eb4af93b195 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.354064596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be430e07-bb2c-4888-bf33-9b81582623d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.354148358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be430e07-bb2c-4888-bf33-9b81582623d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.354502158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be430e07-bb2c-4888-bf33-9b81582623d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.404127242Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f2ac76e-79fb-4b02-a0cc-ab6ed11f2134 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.404353621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f2ac76e-79fb-4b02-a0cc-ab6ed11f2134 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.405541695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4c9721d-0d9f-4f3b-a10c-44b4e9476f9c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.406017559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878935405993556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4c9721d-0d9f-4f3b-a10c-44b4e9476f9c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.406879221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47f41278-afae-4560-a8d1-c31f1931534b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.406966087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47f41278-afae-4560-a8d1-c31f1931534b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.407345484Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47f41278-afae-4560-a8d1-c31f1931534b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.456330663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41b5d6dc-bef7-4071-8926-67832b3f0c96 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.456404939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41b5d6dc-bef7-4071-8926-67832b3f0c96 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.457999156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2956ee2-abcb-4bfb-83e4-154d73c9798a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.458433901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878935458409131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2956ee2-abcb-4bfb-83e4-154d73c9798a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.459004898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3033840b-f215-4aea-94f4-76b805bae382 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.459057915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3033840b-f215-4aea-94f4-76b805bae382 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.459407855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3033840b-f215-4aea-94f4-76b805bae382 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.510405347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cbfb3ce-afaa-478d-a8a5-28084d076069 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.510546910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cbfb3ce-afaa-478d-a8a5-28084d076069 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.512751957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67c96f19-c724-4295-a91f-eeb8e9fb9c1d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.513143384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710878935513120587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67c96f19-c724-4295-a91f-eeb8e9fb9c1d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.514085573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31ca8c14-2ef5-4a1c-b81c-5ba70bc251b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.514142431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31ca8c14-2ef5-4a1c-b81c-5ba70bc251b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:08:55 multinode-695944 crio[2866]: time="2024-03-19 20:08:55.514481763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e30cc4325b94ee3ee94a70bc87f83abd97d139108d42a55c060a4ba5ce2858,PodSandboxId:108d1fa562aee15d62d0883c5cc35fde9e44d036339197aa7e47e9c38f3fb291,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710878740381745109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c,PodSandboxId:a56687072b8daf0a88510f27e0e4c892f9440d17122d89471bed218983d1f9e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710878706961788714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811,PodSandboxId:0eeba5cb52be34cda990d89c3568d619e8b5f29c599219de7768bd695edcaa8d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710878706862285409,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba1f08a-8d4c-4103-a194-92e0a
c532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb3a2f1cbd37b8a7804bec6cf750935d6402d7de5f39e72553aadbe0c495768,PodSandboxId:dcc5215b26eea6e7076e11c846fb756920a1a76748e0fc10fa083f7e18e0b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710878706810689398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},An
notations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4,PodSandboxId:834998258ada32a0eb48afeb82c60843b33c0b72131a87921e59ba09b6ce086f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710878706749427698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08,PodSandboxId:559c72a4b9336ce209312dbefbda3352b65d0d4c305402e6d8a88d1cb4549ba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710878702904806761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3c37bb40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1,PodSandboxId:bb44e33a50e74d8a333a3cf53bed474ce771624d5eff27f5f169ec4062372449,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710878702887901760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299717a2811,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db,PodSandboxId:e0dbcfb5394dde6b8f2f878c227b74a90883087c5265b026564a63b54cd884cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710878702787385674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.kubernetes.container.hash: baf48e1f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab,PodSandboxId:98bf76a234cf09665aef935ae58a1ec32907e4c12af4e91b0e5c1e230cc9b995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710878702732727656,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2b4eaabe922b7deeaf3935896370895ca0b622b98bac77cefdf37e2ecac486,PodSandboxId:aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710878396841168682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-dlzz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b2c8147-6a4d-4820-9ebe-31e7cd960267,},Annotations:map[string]string{io.kubernetes.container.hash: 39f45b77,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f774cccbbfbfb6ee5aa19bb90d997e76024a26355c411e704b32c9001b4bbc,PodSandboxId:87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710878347098988744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e97606-2a07-4334-9e8c-9a0acc183fb4,},Annotations:map[string]string{io.kubernetes.container.hash: d0771d0,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f,PodSandboxId:29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710878346401562167,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-m5zqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc,},Annotations:map[string]string{io.kubernetes.container.hash: d14a85f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751,PodSandboxId:5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710878344702728740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w4nsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8ba1f08a-8d4c-4103-a194-92e0ac532af6,},Annotations:map[string]string{io.kubernetes.container.hash: 3afdb6a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265,PodSandboxId:ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710878344322105657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-84qh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e33b5c-7bd2-4cb8-96b9-3
6d54b1c6c8b,},Annotations:map[string]string{io.kubernetes.container.hash: 58481b3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f,PodSandboxId:4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710878323813887974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 813b1a2d255714d9958f607062ff9ad5,},A
nnotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd,PodSandboxId:4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710878323811305500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c148738974805b7fe15b2299
717a2811,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4,PodSandboxId:76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710878323773266674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21c033a65560c5069d7589a314cda60,},Annotations:map[string]string{io.k
ubernetes.container.hash: baf48e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a,PodSandboxId:e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710878323717001664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-695944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d421c74900624e16edf47e6f064a46b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3c37bb40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31ca8c14-2ef5-4a1c-b81c-5ba70bc251b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1e30cc4325b9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   108d1fa562aee       busybox-7fdf7869d9-dlzz4
	00358166ec138       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   a56687072b8da       coredns-76f75df574-m5zqf
	bc234f70b0e0b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   0eeba5cb52be3       kindnet-w4nsf
	7bb3a2f1cbd37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   dcc5215b26eea       storage-provisioner
	18a85fa20f090       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   834998258ada3       kube-proxy-84qh5
	af4ab955538b6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   559c72a4b9336       kube-apiserver-multinode-695944
	c8e1d961e7965       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   bb44e33a50e74       kube-controller-manager-multinode-695944
	9afa717c7fcfb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   e0dbcfb5394dd       etcd-multinode-695944
	a532a9a37c276       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   98bf76a234cf0       kube-scheduler-multinode-695944
	0b2b4eaabe922       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   aadfa828b775a       busybox-7fdf7869d9-dlzz4
	e8f774cccbbfb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   87cc29b1b8a27       storage-provisioner
	a674af55049f2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   29f352c4970d9       coredns-76f75df574-m5zqf
	b28a2897f4ff9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   5a672beea4c50       kindnet-w4nsf
	baf0f1559ad90       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      9 minutes ago       Exited              kube-proxy                0                   ebdbcba069313       kube-proxy-84qh5
	06c74ed2873c2       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      10 minutes ago      Exited              kube-scheduler            0                   4a82463a70f78       kube-scheduler-multinode-695944
	8e65071c13c79       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      10 minutes ago      Exited              kube-controller-manager   0                   4f74ea81616f9       kube-controller-manager-multinode-695944
	ea6d672313249       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   76225c7bbd791       etcd-multinode-695944
	7f2d48e900d9e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      10 minutes ago      Exited              kube-apiserver            0                   e9a8a9e5729a3       kube-apiserver-multinode-695944
	
	
	==> coredns [00358166ec138a5d0d211a392ceb4ff4d6899a810638f491bd7576059a06e04c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42803 - 39467 "HINFO IN 2115569935030661442.1698298732371478072. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014596738s
	
	
	==> coredns [a674af55049f2afffe601da4b4b1491165b331c318d02624fbc7bcbb1fd4f18f] <==
	[INFO] 10.244.1.2:44345 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001755423s
	[INFO] 10.244.1.2:46902 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100479s
	[INFO] 10.244.1.2:34240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103129s
	[INFO] 10.244.1.2:44764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001238995s
	[INFO] 10.244.1.2:44078 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088083s
	[INFO] 10.244.1.2:34798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112391s
	[INFO] 10.244.1.2:39889 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102421s
	[INFO] 10.244.0.3:55428 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197803s
	[INFO] 10.244.0.3:33089 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097542s
	[INFO] 10.244.0.3:34962 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007454s
	[INFO] 10.244.0.3:55544 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075689s
	[INFO] 10.244.1.2:36294 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00028126s
	[INFO] 10.244.1.2:51905 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171211s
	[INFO] 10.244.1.2:42128 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133258s
	[INFO] 10.244.1.2:41923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000185784s
	[INFO] 10.244.0.3:46893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282187s
	[INFO] 10.244.0.3:50415 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128375s
	[INFO] 10.244.0.3:42790 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191536s
	[INFO] 10.244.0.3:55861 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088957s
	[INFO] 10.244.1.2:47241 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321839s
	[INFO] 10.244.1.2:41760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015151s
	[INFO] 10.244.1.2:53712 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144737s
	[INFO] 10.244.1.2:47973 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174147s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-695944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-695944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=multinode-695944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T19_58_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 19:58:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-695944
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:08:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:58:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:58:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:58:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:05:05 +0000   Tue, 19 Mar 2024 19:59:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    multinode-695944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3780e196839483195c87cd874aaaec3
	  System UUID:                b3780e19-6839-4831-95c8-7cd874aaaec3
	  Boot ID:                    53258622-94ab-4256-b665-5c00d785c28d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-dlzz4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 coredns-76f75df574-m5zqf                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m52s
	  kube-system                 etcd-multinode-695944                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-w4nsf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m52s
	  kube-system                 kube-apiserver-multinode-695944             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-695944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-84qh5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-scheduler-multinode-695944             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m50s                  kube-proxy       
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-695944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-695944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-695944 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m53s                  node-controller  Node multinode-695944 event: Registered Node multinode-695944 in Controller
	  Normal  NodeReady                9m50s                  kubelet          Node multinode-695944 status is now: NodeReady
	  Normal  Starting                 3m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m53s)  kubelet          Node multinode-695944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m53s)  kubelet          Node multinode-695944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node multinode-695944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m37s                  node-controller  Node multinode-695944 event: Registered Node multinode-695944 in Controller
	
	
	Name:               multinode-695944-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-695944-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=multinode-695944
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_19T20_05_50_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:05:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-695944-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:06:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:07:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:07:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:07:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 19 Mar 2024 20:06:19 +0000   Tue, 19 Mar 2024 20:07:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-695944-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 213dd72380c94515b19b4d2c8d1ecff8
	  System UUID:                213dd723-80c9-4515-b19b-4d2c8d1ecff8
	  Boot ID:                    432f3741-8a07-44e4-b952-0dd7781e43d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-xbp2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-278kv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m15s
	  kube-system                 kube-proxy-6x79z            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m15s (x2 over 9m15s)  kubelet          Node multinode-695944-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x2 over 9m15s)  kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s (x2 over 9m15s)  kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m4s                   kubelet          Node multinode-695944-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)    kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)    kubelet          Node multinode-695944-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)    kubelet          Node multinode-695944-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                   node-controller  Node multinode-695944-m02 event: Registered Node multinode-695944-m02 in Controller
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-695944-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-695944-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.061890] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077659] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.189961] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.127415] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.289647] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +5.066014] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.060150] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.848488] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[  +0.468867] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.821224] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.086902] kauditd_printk_skb: 41 callbacks suppressed
	[Mar19 19:59] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +0.072972] kauditd_printk_skb: 21 callbacks suppressed
	[ +49.699123] kauditd_printk_skb: 82 callbacks suppressed
	[Mar19 20:04] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.138617] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.199797] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +0.149799] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.296161] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.814247] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[Mar19 20:05] systemd-fstab-generator[3077]: Ignoring "noauto" option for root device
	[  +4.672038] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.961443] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.933688] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[ +16.905761] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [9afa717c7fcfb171f85847ec5167169c834280b14e0b1181b639e52c35aa27db] <==
	{"level":"info","ts":"2024-03-19T20:05:03.277003Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:05:03.277016Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:05:03.277293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c switched to configuration voters=(9064678732556469820)"}
	{"level":"info","ts":"2024-03-19T20:05:03.277385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","added-peer-id":"7dcc3547d111063c","added-peer-peer-urls":["https://192.168.39.64:2380"]}
	{"level":"info","ts":"2024-03-19T20:05:03.277522Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3619ef1effce12d","local-member-id":"7dcc3547d111063c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:05:03.279653Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:05:03.291017Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-19T20:05:03.29128Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dcc3547d111063c","initial-advertise-peer-urls":["https://192.168.39.64:2380"],"listen-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-19T20:05:03.29133Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:05:03.291427Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:05:03.291461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:05:04.339159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-19T20:05:04.339276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:05:04.339315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgPreVoteResp from 7dcc3547d111063c at term 2"}
	{"level":"info","ts":"2024-03-19T20:05:04.339345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.33937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.339397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dcc3547d111063c became leader at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.339427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 3"}
	{"level":"info","ts":"2024-03-19T20:05:04.349222Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7dcc3547d111063c","local-member-attributes":"{Name:multinode-695944 ClientURLs:[https://192.168.39.64:2379]}","request-path":"/0/members/7dcc3547d111063c/attributes","cluster-id":"c3619ef1effce12d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:05:04.349247Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:05:04.349501Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:05:04.349549Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:05:04.349283Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:05:04.351684Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.64:2379"}
	{"level":"info","ts":"2024-03-19T20:05:04.351812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [ea6d672313249c7b55aeaa36344995fc7cb8eb9b4d48944cb93ec200172af0f4] <==
	{"level":"warn","ts":"2024-03-19T19:59:40.434656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.794642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T19:59:40.434691Z","caller":"traceutil/trace.go:171","msg":"trace[459030470] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:452; }","duration":"232.942193ms","start":"2024-03-19T19:59:40.201739Z","end":"2024-03-19T19:59:40.434681Z","steps":["trace[459030470] 'agreement among raft nodes before linearized reading'  (duration: 232.794077ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:59:43.214021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.558398ms","expected-duration":"100ms","prefix":"","request":"header:<ID:449390572704474532 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-695944-m02\" mod_revision:476 > success:<request_put:<key:\"/registry/minions/multinode-695944-m02\" value_size:2892 >> failure:<request_range:<key:\"/registry/minions/multinode-695944-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-19T19:59:43.21437Z","caller":"traceutil/trace.go:171","msg":"trace[2060675690] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"359.334617ms","start":"2024-03-19T19:59:42.855008Z","end":"2024-03-19T19:59:43.214343Z","steps":["trace[2060675690] 'read index received'  (duration: 160.077522ms)","trace[2060675690] 'applied index is now lower than readState.Index'  (duration: 199.255582ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T19:59:43.2145Z","caller":"traceutil/trace.go:171","msg":"trace[1221424993] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"419.277568ms","start":"2024-03-19T19:59:42.795211Z","end":"2024-03-19T19:59:43.214489Z","steps":["trace[1221424993] 'process raft request'  (duration: 220.063963ms)","trace[1221424993] 'compare'  (duration: 198.261774ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T19:59:43.214695Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:59:42.795195Z","time spent":"419.35851ms","remote":"127.0.0.1:48166","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2938,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-695944-m02\" mod_revision:476 > success:<request_put:<key:\"/registry/minions/multinode-695944-m02\" value_size:2892 >> failure:<request_range:<key:\"/registry/minions/multinode-695944-m02\" > >"}
	{"level":"warn","ts":"2024-03-19T19:59:43.21486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.845338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-695944-m02\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-03-19T19:59:43.214937Z","caller":"traceutil/trace.go:171","msg":"trace[642578206] range","detail":"{range_begin:/registry/minions/multinode-695944-m02; range_end:; response_count:1; response_revision:480; }","duration":"359.942471ms","start":"2024-03-19T19:59:42.854975Z","end":"2024-03-19T19:59:43.214918Z","steps":["trace[642578206] 'agreement among raft nodes before linearized reading'  (duration: 359.852738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T19:59:43.214971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T19:59:42.854962Z","time spent":"359.999415ms","remote":"127.0.0.1:48166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2976,"request content":"key:\"/registry/minions/multinode-695944-m02\" "}
	{"level":"warn","ts":"2024-03-19T20:00:27.981018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.915593ms","expected-duration":"100ms","prefix":"","request":"header:<ID:449390572704474876 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-695944-m03.17be42dd8997384d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-695944-m03.17be42dd8997384d\" value_size:646 lease:449390572704474601 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-03-19T20:00:27.981305Z","caller":"traceutil/trace.go:171","msg":"trace[1422053545] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"188.638292ms","start":"2024-03-19T20:00:27.792648Z","end":"2024-03-19T20:00:27.981287Z","steps":["trace[1422053545] 'process raft request'  (duration: 188.585037ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:00:27.98131Z","caller":"traceutil/trace.go:171","msg":"trace[742572598] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"248.357577ms","start":"2024-03-19T20:00:27.732937Z","end":"2024-03-19T20:00:27.981294Z","steps":["trace[742572598] 'read index received'  (duration: 20.167µs)","trace[742572598] 'applied index is now lower than readState.Index'  (duration: 248.336326ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:00:27.981497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.542091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-695944-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:00:27.981633Z","caller":"traceutil/trace.go:171","msg":"trace[1283517086] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-695944-m03; range_end:; response_count:0; response_revision:578; }","duration":"248.647213ms","start":"2024-03-19T20:00:27.732916Z","end":"2024-03-19T20:00:27.981564Z","steps":["trace[1283517086] 'agreement among raft nodes before linearized reading'  (duration: 248.508903ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:00:27.981765Z","caller":"traceutil/trace.go:171","msg":"trace[607254771] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"248.374096ms","start":"2024-03-19T20:00:27.732914Z","end":"2024-03-19T20:00:27.981288Z","steps":["trace[607254771] 'process raft request'  (duration: 73.046538ms)","trace[607254771] 'compare'  (duration: 174.705789ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T20:03:26.746981Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-19T20:03:26.747148Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-695944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"]}
	{"level":"warn","ts":"2024-03-19T20:03:26.747259Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:03:26.747429Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:03:26.810421Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.64:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:03:26.810473Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.64:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T20:03:26.811929Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7dcc3547d111063c","current-leader-member-id":"7dcc3547d111063c"}
	{"level":"info","ts":"2024-03-19T20:03:26.814431Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:03:26.814629Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.64:2380"}
	{"level":"info","ts":"2024-03-19T20:03:26.814641Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-695944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.64:2380"],"advertise-client-urls":["https://192.168.39.64:2379"]}
	
	
	==> kernel <==
	 20:08:56 up 10 min,  0 users,  load average: 0.25, 0.14, 0.10
	Linux multinode-695944 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b28a2897f4ff960fe634f8c0cd43928124843c641e6aa7522a2aeb0c95234751] <==
	I0319 20:02:45.791842       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:02:55.805975       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:02:55.806181       1 main.go:227] handling current node
	I0319 20:02:55.806223       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:02:55.806244       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:02:55.806395       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:02:55.806416       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:03:05.816175       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:03:05.816538       1 main.go:227] handling current node
	I0319 20:03:05.816684       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:03:05.816795       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:03:05.816987       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:03:05.817049       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:03:15.830360       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:03:15.830495       1 main.go:227] handling current node
	I0319 20:03:15.830525       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:03:15.830545       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:03:15.830735       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:03:15.830772       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	I0319 20:03:25.841367       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:03:25.841448       1 main.go:227] handling current node
	I0319 20:03:25.841464       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:03:25.841474       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:03:25.841790       1 main.go:223] Handling node with IPs: map[192.168.39.105:{}]
	I0319 20:03:25.841842       1 main.go:250] Node multinode-695944-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bc234f70b0e0b72c0f0b41f209db068433cfedcd8e06b85a13ce0ea0fd6d8811] <==
	I0319 20:07:47.844343       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:07:57.849331       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:07:57.849703       1 main.go:227] handling current node
	I0319 20:07:57.849753       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:07:57.849789       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:08:07.856915       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:08:07.856972       1 main.go:227] handling current node
	I0319 20:08:07.856982       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:08:07.856989       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:08:17.868699       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:08:17.868762       1 main.go:227] handling current node
	I0319 20:08:17.868778       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:08:17.868784       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:08:27.878022       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:08:27.878122       1 main.go:227] handling current node
	I0319 20:08:27.878145       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:08:27.878162       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:08:37.891355       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:08:37.891496       1 main.go:227] handling current node
	I0319 20:08:37.891534       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:08:37.891558       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	I0319 20:08:47.897372       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0319 20:08:47.897473       1 main.go:227] handling current node
	I0319 20:08:47.897497       1 main.go:223] Handling node with IPs: map[192.168.39.233:{}]
	I0319 20:08:47.897515       1 main.go:250] Node multinode-695944-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7f2d48e900d9e4f7b9f5e5a0a43d7cd636d82326e79ff187758c0affc82a0b0a] <==
	I0319 20:03:26.774291       1 controller.go:161] Shutting down OpenAPI controller
	I0319 20:03:26.774321       1 controller.go:129] Ending legacy_token_tracking_controller
	I0319 20:03:26.774344       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0319 20:03:26.774373       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0319 20:03:26.774411       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0319 20:03:26.774450       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0319 20:03:26.774486       1 establishing_controller.go:87] Shutting down EstablishingController
	I0319 20:03:26.774516       1 naming_controller.go:302] Shutting down NamingConditionController
	I0319 20:03:26.774549       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0319 20:03:26.774745       1 available_controller.go:439] Shutting down AvailableConditionController
	I0319 20:03:26.776417       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0319 20:03:26.776452       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0319 20:03:26.776489       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0319 20:03:26.777728       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 20:03:26.777798       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:03:26.777862       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0319 20:03:26.777898       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0319 20:03:26.777925       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0319 20:03:26.777955       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0319 20:03:26.778028       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0319 20:03:26.778056       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 20:03:26.781699       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0319 20:03:26.781773       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:03:26.781843       1 controller.go:159] Shutting down quota evaluator
	I0319 20:03:26.781873       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [af4ab955538b64ec7b81c5d25e8342e1dbf538410d8e07d3c153977de9509e08] <==
	I0319 20:05:05.693922       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0319 20:05:05.750239       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:05:05.750360       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 20:05:05.793847       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 20:05:05.794090       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 20:05:05.794507       1 aggregator.go:165] initial CRD sync complete...
	I0319 20:05:05.794556       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 20:05:05.794646       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 20:05:05.798392       1 cache.go:39] Caches are synced for autoregister controller
	I0319 20:05:05.801892       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 20:05:05.867882       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 20:05:05.879039       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 20:05:05.881415       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 20:05:05.881965       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 20:05:05.887157       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 20:05:05.887257       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0319 20:05:05.902142       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 20:05:06.708410       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 20:05:08.084061       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 20:05:08.222242       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 20:05:08.238228       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 20:05:08.311283       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 20:05:08.318667       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 20:05:18.447886       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 20:05:18.498231       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8e65071c13c7943218732dc3a7e62fab51d2e0499a1b125f2a27da14783e66fd] <==
	I0319 19:59:57.554814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="34.602µs"
	I0319 19:59:57.713188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="6.397936ms"
	I0319 19:59:57.716082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.028µs"
	I0319 20:00:27.985928       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:00:27.985995       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m03\" does not exist"
	I0319 20:00:28.020695       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z5zqq"
	I0319 20:00:28.025986       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m03" podCIDRs=["10.244.2.0/24"]
	I0319 20:00:28.026226       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6kvnk"
	I0319 20:00:32.735902       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-695944-m03"
	I0319 20:00:32.735987       1 event.go:376] "Event occurred" object="multinode-695944-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-695944-m03 event: Registered Node multinode-695944-m03 in Controller"
	I0319 20:00:37.940270       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:01:09.498892       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:01:10.511731       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m03\" does not exist"
	I0319 20:01:10.511819       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:01:10.541110       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m03" podCIDRs=["10.244.3.0/24"]
	I0319 20:01:20.257330       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:02:02.792858       1 event.go:376] "Event occurred" object="multinode-695944-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-695944-m02 status is now: NodeNotReady"
	I0319 20:02:02.793014       1 event.go:376] "Event occurred" object="multinode-695944-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-695944-m03 status is now: NodeNotReady"
	I0319 20:02:02.808821       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-qsnxk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.816940       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-z5zqq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.824967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="15.734179ms"
	I0319 20:02:02.825543       1 event.go:376] "Event occurred" object="kube-system/kindnet-278kv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.827137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.883µs"
	I0319 20:02:02.839546       1 event.go:376] "Event occurred" object="kube-system/kindnet-6kvnk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:02:02.841312       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-6x79z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [c8e1d961e79651eca68563db4ebcc8b13c239e6a6ab4304bbe7c44051a9ea2f1] <==
	I0319 20:05:51.329841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="35.616µs"
	I0319 20:05:53.270543       1 event.go:376] "Event occurred" object="multinode-695944-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-695944-m02 event: Registered Node multinode-695944-m02 in Controller"
	I0319 20:05:58.832157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:05:58.859023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.79µs"
	I0319 20:05:58.875014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.876µs"
	I0319 20:06:02.690814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="7.414986ms"
	I0319 20:06:02.690916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="33.204µs"
	I0319 20:06:03.282235       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-xbp2r" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-xbp2r"
	I0319 20:06:18.483168       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:06:19.494678       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-695944-m03\" does not exist"
	I0319 20:06:19.494761       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:06:19.519139       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-695944-m03" podCIDRs=["10.244.2.0/24"]
	I0319 20:06:28.696197       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:06:34.483007       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-695944-m02"
	I0319 20:06:38.304807       1 event.go:376] "Event occurred" object="multinode-695944-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-695944-m03 event: Removing Node multinode-695944-m03 from Controller"
	I0319 20:07:13.322110       1 event.go:376] "Event occurred" object="multinode-695944-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-695944-m02 status is now: NodeNotReady"
	I0319 20:07:13.334812       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-xbp2r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:07:13.345523       1 event.go:376] "Event occurred" object="kube-system/kindnet-278kv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:07:13.349030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.112497ms"
	I0319 20:07:13.350764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.879µs"
	I0319 20:07:13.360441       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-6x79z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0319 20:07:38.187733       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-z5zqq"
	I0319 20:07:38.220499       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-z5zqq"
	I0319 20:07:38.220557       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-6kvnk"
	I0319 20:07:38.244138       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-6kvnk"
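The gc_controller lines show PodGC force-deleting kube-proxy-z5zqq and kindnet-6kvnk after their node (multinode-695944-m03) was removed from the cluster. A minimal client-go sketch for listing the pods still bound to a given node, which is the set PodGC treats as orphaned once the node object is gone; the kubeconfig path and node name below are assumptions taken from this report, not part of the test itself.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for this test environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18453-10028/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pods still scheduled to the deleted node.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
            FieldSelector: "spec.nodeName=multinode-695944-m03",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }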
	
	
	==> kube-proxy [18a85fa20f090969781debb6e763f8d0f910f66c32d7191d08eb33c28c840be4] <==
	I0319 20:05:07.131946       1 server_others.go:72] "Using iptables proxy"
	I0319 20:05:07.197384       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	I0319 20:05:07.320787       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:05:07.320835       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:05:07.320855       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:05:07.325197       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:05:07.325464       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:05:07.325500       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:05:07.328520       1 config.go:188] "Starting service config controller"
	I0319 20:05:07.328672       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:05:07.328740       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:05:07.328759       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:05:07.329331       1 config.go:315] "Starting node config controller"
	I0319 20:05:07.329390       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:05:07.429666       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:05:07.429735       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 20:05:07.429564       1 shared_informer.go:318] Caches are synced for node config
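kube-proxy reports that it set route_localnet=1 so that NodePorts also answer on loopback addresses. A small sketch, assuming a Linux guest, for confirming that sysctl from inside the node; it reads the standard procfs path rather than any minikube-specific API.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Equivalent to: sysctl net.ipv4.conf.all.route_localnet
        b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
        if err != nil {
            panic(err)
        }
        fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
    }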
	
	
	==> kube-proxy [baf0f1559ad902e641cf9f6ddefc33e6bf7471ba3b058a1ca59231c7be082265] <==
	I0319 19:59:04.698082       1 server_others.go:72] "Using iptables proxy"
	I0319 19:59:04.719668       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.64"]
	I0319 19:59:04.854127       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 19:59:04.854150       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 19:59:04.854164       1 server_others.go:168] "Using iptables Proxier"
	I0319 19:59:04.864763       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 19:59:04.866867       1 server.go:865] "Version info" version="v1.29.3"
	I0319 19:59:04.866889       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:59:04.883421       1 config.go:97] "Starting endpoint slice config controller"
	I0319 19:59:04.884741       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 19:59:04.884849       1 config.go:188] "Starting service config controller"
	I0319 19:59:04.884857       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 19:59:04.891002       1 config.go:315] "Starting node config controller"
	I0319 19:59:04.891014       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 19:59:04.984892       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 19:59:04.984936       1 shared_informer.go:318] Caches are synced for service config
	I0319 19:59:04.995377       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [06c74ed2873c2a103479302340b8d3ce6a6fe1016d7b42c33d451f897922c22f] <==
	W0319 19:58:46.683308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:58:46.683325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:58:47.478809       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 19:58:47.478923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 19:58:47.516213       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0319 19:58:47.516274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 19:58:47.517287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 19:58:47.518133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 19:58:47.580960       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 19:58:47.581179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 19:58:47.597931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 19:58:47.598119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 19:58:47.760700       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 19:58:47.760820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 19:58:47.777847       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 19:58:47.778032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 19:58:47.812068       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 19:58:47.812128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 19:58:48.101051       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 19:58:48.101126       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 19:58:50.850474       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:03:26.749915       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0319 20:03:26.750139       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0319 20:03:26.750688       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0319 20:03:26.761350       1 run.go:74] "command failed" err="finished without leader elect"
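The burst of "forbidden" list/watch errors at 19:58:46-48 is the scheduler starting before its RBAC bindings are being served; they stop once "Caches are synced" appears at 19:58:50, so they look like startup noise rather than a persistent authorization problem. A hedged client-go sketch for checking one of those permissions explicitly, roughly equivalent to "kubectl auth can-i list namespaces --as=system:kube-scheduler"; the kubeconfig path is an assumption from this report, and impersonating the scheduler user requires admin rights.

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18453-10028/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Ask "may system:kube-scheduler list namespaces?" by impersonating it.
        cfg.Impersonate = rest.ImpersonationConfig{UserName: "system:kube-scheduler"}
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        review := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "namespaces"},
            },
        }
        res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", res.Status.Allowed)
    }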
	
	
	==> kube-scheduler [a532a9a37c276bb0cb5545f53abacfb51f64a677849f35d92c2d14c8644889ab] <==
	I0319 20:05:03.870915       1 serving.go:380] Generated self-signed cert in-memory
	W0319 20:05:05.771343       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0319 20:05:05.771447       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:05:05.771494       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0319 20:05:05.771527       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0319 20:05:05.803320       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 20:05:05.803436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:05:05.805220       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 20:05:05.805310       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:05:05.806431       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 20:05:05.807700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 20:05:05.906510       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:07:02 multinode-695944 kubelet[3084]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:07:02 multinode-695944 kubelet[3084]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.181701    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc/crio-29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1: Error finding container 29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1: Status 404 returned error can't find the container with id 29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.182336    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podf9e97606-2a07-4334-9e8c-9a0acc183fb4/crio-87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3: Error finding container 87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3: Status 404 returned error can't find the container with id 87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.182544    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d421c74900624e16edf47e6f064a46b/crio-e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b: Error finding container e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b: Status 404 returned error can't find the container with id e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.183001    3084 manager.go:1116] Failed to create existing container: /kubepods/pod8ba1f08a-8d4c-4103-a194-92e0ac532af6/crio-5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf: Error finding container 5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf: Status 404 returned error can't find the container with id 5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.183338    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod813b1a2d255714d9958f607062ff9ad5/crio-4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8: Error finding container 4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8: Status 404 returned error can't find the container with id 4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.183513    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc148738974805b7fe15b2299717a2811/crio-4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8: Error finding container 4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8: Status 404 returned error can't find the container with id 4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.184162    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda21c033a65560c5069d7589a314cda60/crio-76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32: Error finding container 76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32: Status 404 returned error can't find the container with id 76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.184559    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod1b2c8147-6a4d-4820-9ebe-31e7cd960267/crio-aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5: Error finding container aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5: Status 404 returned error can't find the container with id aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5
	Mar 19 20:07:02 multinode-695944 kubelet[3084]: E0319 20:07:02.187869    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b/crio-ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691: Error finding container ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691: Status 404 returned error can't find the container with id ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.135375    3084 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:08:02 multinode-695944 kubelet[3084]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:08:02 multinode-695944 kubelet[3084]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:08:02 multinode-695944 kubelet[3084]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:08:02 multinode-695944 kubelet[3084]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.181156    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podf9e97606-2a07-4334-9e8c-9a0acc183fb4/crio-87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3: Error finding container 87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3: Status 404 returned error can't find the container with id 87cc29b1b8a27be436012730dfc69afa2176b718eb54848cdfe4b7bdd924eae3
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.181730    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d421c74900624e16edf47e6f064a46b/crio-e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b: Error finding container e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b: Status 404 returned error can't find the container with id e9a8a9e5729a34bf5446482b7c9fb0de953dd2df5f24bc8d3cd08ee23133441b
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.182226    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda21c033a65560c5069d7589a314cda60/crio-76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32: Error finding container 76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32: Status 404 returned error can't find the container with id 76225c7bbd79190f083ed917979f98684d1567cd83e0ead19763b2e13618cc32
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.182719    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod12e33b5c-7bd2-4cb8-96b9-36d54b1c6c8b/crio-ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691: Error finding container ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691: Status 404 returned error can't find the container with id ebdbcba0693133670cbaba07af7438d084233e397a9c322d33941bb14641b691
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.183012    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda0b1b3c8-edfc-4a3d-a99a-a30bb1bfcbbc/crio-29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1: Error finding container 29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1: Status 404 returned error can't find the container with id 29f352c4970d90cb190ac451506c493dbce1722584f8b855921ee0d03b65c0a1
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.183308    3084 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod1b2c8147-6a4d-4820-9ebe-31e7cd960267/crio-aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5: Error finding container aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5: Status 404 returned error can't find the container with id aadfa828b775a10eb43ca12a08e2f88a90c538d1c428b69d4b48809e02263fe5
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.183703    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc148738974805b7fe15b2299717a2811/crio-4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8: Error finding container 4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8: Status 404 returned error can't find the container with id 4f74ea81616f9ba82c04bc198873fe05da1f96a8ce46c5702c45e2488d0f52f8
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.184017    3084 manager.go:1116] Failed to create existing container: /kubepods/pod8ba1f08a-8d4c-4103-a194-92e0ac532af6/crio-5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf: Error finding container 5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf: Status 404 returned error can't find the container with id 5a672beea4c50da3f2177af93caebebae289da655de1986c6c53a8e3804cc1cf
	Mar 19 20:08:02 multinode-695944 kubelet[3084]: E0319 20:08:02.184298    3084 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod813b1a2d255714d9958f607062ff9ad5/crio-4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8: Error finding container 4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8: Status 404 returned error can't find the container with id 4a82463a70f782a3395cfa63f2924f375954f910b27057d3a229fe4ff3bea2d8
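The recurring "Could not set up iptables canary" messages come from the kubelet periodically probing the IPv6 nat table; the guest kernel here has no ip6tables nat support, so the probe fails about once a minute. This appears to be unrelated noise rather than the cause of the stop failure. A sketch that reproduces the same probe, assuming ip6tables is on PATH inside the guest.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The kubelet's canary effectively performs this IPv6 nat-table check.
        out, err := exec.Command("ip6tables", "-t", "nat", "-S").CombinedOutput()
        if err != nil {
            // Matches the log above: the nat table is unavailable for ip6tables.
            fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
            return
        }
        fmt.Printf("ip6tables nat rules:\n%s", out)
    }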
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:08:55.069992   45248 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
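The stderr line above is minikube's log collector choking on lastStart.txt: bufio.Scanner caps a single token at 64 KiB by default (bufio.MaxScanTokenSize), and a very long line in that file exceeds it. A minimal sketch of the generic workaround, assuming one simply wants to re-read such a file with a larger limit; this is not minikube's logs.go.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("/home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Raise the per-line limit from the 64 KiB default to 10 MiB.
        sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            panic(err) // still reports "token too long" if a line exceeds 10 MiB
        }
    }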
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-695944 -n multinode-695944
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-695944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.43s)

                                                
                                    
x
+
TestPreload (278.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-554330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0319 20:14:13.890765   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 20:14:30.843937   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-554330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m15.137522394s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-554330 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-554330 image pull gcr.io/k8s-minikube/busybox: (3.0979507s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-554330
E0319 20:15:04.834983   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-554330: exit status 82 (2m0.465593583s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-554330"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-554330 failed: exit status 82
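Exit status 82 (GUEST_STOP_TIMEOUT) means the kvm2 driver could not move the VM out of the "Running" state within the stop timeout. A sketch for querying the libvirt domain state directly on the host; it assumes virsh is installed and that the domain carries the profile name (test-preload-554330), which is how the kvm2 driver typically names its VMs.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Assumed domain name: the minikube profile name used by the kvm2 driver.
        out, err := exec.Command("virsh", "domstate", "test-preload-554330").CombinedOutput()
        if err != nil {
            fmt.Printf("virsh failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("domain state: %s", out)
    }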
panic.go:626: *** TestPreload FAILED at 2024-03-19 20:16:53.912983898 +0000 UTC m=+4338.033603662
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-554330 -n test-preload-554330
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-554330 -n test-preload-554330: exit status 3 (18.463938829s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:17:12.372584   47655 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	E0319 20:17:12.372607   47655 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host

                                                
                                                
** /stderr **
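After the failed stop, the status check cannot even open an SSH session: "dial tcp 192.168.39.145:22: connect: no route to host", so the guest is no longer reachable at its previous address. A minimal reachability probe under the same assumptions (host 192.168.39.145, SSH port 22, values taken from the stderr above).

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "192.168.39.145:22" // address reported in the status error above
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            fmt.Println("unreachable:", err) // e.g. "connect: no route to host"
            return
        }
        conn.Close()
        fmt.Println("reachable:", addr)
    }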
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-554330" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-554330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-554330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-554330: (1.088142004s)
--- FAIL: TestPreload (278.25s)

                                                
                                    
x
+
TestKubernetesUpgrade (451.13s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m0.752422979s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-853797] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-853797" primary control-plane node in "kubernetes-upgrade-853797" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:19:10.709227   49076 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:19:10.709506   49076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:19:10.709515   49076 out.go:304] Setting ErrFile to fd 2...
	I0319 20:19:10.709519   49076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:19:10.709714   49076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:19:10.710161   49076 out.go:298] Setting JSON to false
	I0319 20:19:10.710873   49076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7249,"bootTime":1710872302,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:19:10.710940   49076 start.go:139] virtualization: kvm guest
	I0319 20:19:10.712358   49076 out.go:177] * [kubernetes-upgrade-853797] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:19:10.715259   49076 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:19:10.714171   49076 notify.go:220] Checking for updates...
	I0319 20:19:10.717757   49076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:19:10.719031   49076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:19:10.720443   49076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:19:10.722253   49076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:19:10.723902   49076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:19:10.725551   49076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:19:10.761344   49076 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:19:10.762708   49076 start.go:297] selected driver: kvm2
	I0319 20:19:10.762725   49076 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:19:10.762748   49076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:19:10.763681   49076 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:19:10.774182   49076 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:19:10.789386   49076 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:19:10.789430   49076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 20:19:10.789604   49076 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 20:19:10.789665   49076 cni.go:84] Creating CNI manager for ""
	I0319 20:19:10.789681   49076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:19:10.789687   49076 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 20:19:10.789743   49076 start.go:340] cluster config:
	{Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:19:10.789827   49076 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:19:10.791953   49076 out.go:177] * Starting "kubernetes-upgrade-853797" primary control-plane node in "kubernetes-upgrade-853797" cluster
	I0319 20:19:10.793783   49076 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:19:10.793830   49076 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0319 20:19:10.793847   49076 cache.go:56] Caching tarball of preloaded images
	I0319 20:19:10.793923   49076 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:19:10.793939   49076 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0319 20:19:10.794336   49076 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/config.json ...
	I0319 20:19:10.794368   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/config.json: {Name:mke8b888bf9f70a228d5f527a6b8b4a4ab99067d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:19:10.794524   49076 start.go:360] acquireMachinesLock for kubernetes-upgrade-853797: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:19:36.114323   49076 start.go:364] duration metric: took 25.319748665s to acquireMachinesLock for "kubernetes-upgrade-853797"
	I0319 20:19:36.114387   49076 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:19:36.114500   49076 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 20:19:36.116773   49076 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 20:19:36.116954   49076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:19:36.117015   49076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:19:36.134910   49076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0319 20:19:36.135376   49076 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:19:36.135997   49076 main.go:141] libmachine: Using API Version  1
	I0319 20:19:36.136017   49076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:19:36.136393   49076 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:19:36.136577   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:19:36.136728   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:19:36.136892   49076 start.go:159] libmachine.API.Create for "kubernetes-upgrade-853797" (driver="kvm2")
	I0319 20:19:36.136919   49076 client.go:168] LocalClient.Create starting
	I0319 20:19:36.136953   49076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 20:19:36.136997   49076 main.go:141] libmachine: Decoding PEM data...
	I0319 20:19:36.137016   49076 main.go:141] libmachine: Parsing certificate...
	I0319 20:19:36.137089   49076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 20:19:36.137114   49076 main.go:141] libmachine: Decoding PEM data...
	I0319 20:19:36.137135   49076 main.go:141] libmachine: Parsing certificate...
	I0319 20:19:36.137160   49076 main.go:141] libmachine: Running pre-create checks...
	I0319 20:19:36.137176   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .PreCreateCheck
	I0319 20:19:36.137503   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetConfigRaw
	I0319 20:19:36.137867   49076 main.go:141] libmachine: Creating machine...
	I0319 20:19:36.137881   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .Create
	I0319 20:19:36.138011   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Creating KVM machine...
	I0319 20:19:36.139217   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found existing default KVM network
	I0319 20:19:36.140226   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:36.140057   49370 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:d3:78} reservation:<nil>}
	I0319 20:19:36.141156   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:36.141071   49370 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fa50}
	I0319 20:19:36.141192   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | created network xml: 
	I0319 20:19:36.141205   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | <network>
	I0319 20:19:36.141215   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |   <name>mk-kubernetes-upgrade-853797</name>
	I0319 20:19:36.141229   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |   <dns enable='no'/>
	I0319 20:19:36.141239   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |   
	I0319 20:19:36.141254   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0319 20:19:36.141266   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |     <dhcp>
	I0319 20:19:36.141294   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0319 20:19:36.141320   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |     </dhcp>
	I0319 20:19:36.141345   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |   </ip>
	I0319 20:19:36.141363   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG |   
	I0319 20:19:36.141373   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | </network>
	I0319 20:19:36.141380   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | 
	I0319 20:19:36.146415   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | trying to create private KVM network mk-kubernetes-upgrade-853797 192.168.50.0/24...
	I0319 20:19:36.213676   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | private KVM network mk-kubernetes-upgrade-853797 192.168.50.0/24 created
	I0319 20:19:36.213715   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:36.213622   49370 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:19:36.213736   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797 ...
	I0319 20:19:36.213780   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 20:19:36.213803   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 20:19:36.443116   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:36.442973   49370 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa...
	I0319 20:19:36.560300   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:36.560163   49370 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/kubernetes-upgrade-853797.rawdisk...
	I0319 20:19:36.560336   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Writing magic tar header
	I0319 20:19:36.560354   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Writing SSH key tar header
	I0319 20:19:36.560447   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:36.560362   49370 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797 ...
	I0319 20:19:36.560495   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797
	I0319 20:19:36.560529   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797 (perms=drwx------)
	I0319 20:19:36.560560   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 20:19:36.560571   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 20:19:36.560590   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:19:36.560604   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 20:19:36.560618   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 20:19:36.560630   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home/jenkins
	I0319 20:19:36.560650   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 20:19:36.560668   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Checking permissions on dir: /home
	I0319 20:19:36.560682   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 20:19:36.560699   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 20:19:36.560712   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 20:19:36.560726   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Creating domain...
	I0319 20:19:36.560739   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Skipping /home - not owner
	I0319 20:19:36.561746   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) define libvirt domain using xml: 
	I0319 20:19:36.561765   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) <domain type='kvm'>
	I0319 20:19:36.561774   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <name>kubernetes-upgrade-853797</name>
	I0319 20:19:36.561786   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <memory unit='MiB'>2200</memory>
	I0319 20:19:36.561800   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <vcpu>2</vcpu>
	I0319 20:19:36.561811   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <features>
	I0319 20:19:36.561821   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <acpi/>
	I0319 20:19:36.561836   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <apic/>
	I0319 20:19:36.561871   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <pae/>
	I0319 20:19:36.561897   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     
	I0319 20:19:36.561909   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   </features>
	I0319 20:19:36.561927   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <cpu mode='host-passthrough'>
	I0319 20:19:36.561940   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   
	I0319 20:19:36.561952   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   </cpu>
	I0319 20:19:36.561965   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <os>
	I0319 20:19:36.561977   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <type>hvm</type>
	I0319 20:19:36.561988   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <boot dev='cdrom'/>
	I0319 20:19:36.562000   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <boot dev='hd'/>
	I0319 20:19:36.562019   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <bootmenu enable='no'/>
	I0319 20:19:36.562033   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   </os>
	I0319 20:19:36.562045   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   <devices>
	I0319 20:19:36.562057   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <disk type='file' device='cdrom'>
	I0319 20:19:36.562073   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/boot2docker.iso'/>
	I0319 20:19:36.562123   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <target dev='hdc' bus='scsi'/>
	I0319 20:19:36.562146   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <readonly/>
	I0319 20:19:36.562156   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </disk>
	I0319 20:19:36.562171   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <disk type='file' device='disk'>
	I0319 20:19:36.562187   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 20:19:36.562199   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/kubernetes-upgrade-853797.rawdisk'/>
	I0319 20:19:36.562218   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <target dev='hda' bus='virtio'/>
	I0319 20:19:36.562228   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </disk>
	I0319 20:19:36.562239   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <interface type='network'>
	I0319 20:19:36.562255   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <source network='mk-kubernetes-upgrade-853797'/>
	I0319 20:19:36.562268   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <model type='virtio'/>
	I0319 20:19:36.562279   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </interface>
	I0319 20:19:36.562289   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <interface type='network'>
	I0319 20:19:36.562300   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <source network='default'/>
	I0319 20:19:36.562310   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <model type='virtio'/>
	I0319 20:19:36.562319   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </interface>
	I0319 20:19:36.562346   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <serial type='pty'>
	I0319 20:19:36.562384   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <target port='0'/>
	I0319 20:19:36.562398   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </serial>
	I0319 20:19:36.562409   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <console type='pty'>
	I0319 20:19:36.562423   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <target type='serial' port='0'/>
	I0319 20:19:36.562434   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </console>
	I0319 20:19:36.562446   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     <rng model='virtio'>
	I0319 20:19:36.562455   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)       <backend model='random'>/dev/random</backend>
	I0319 20:19:36.562467   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     </rng>
	I0319 20:19:36.562476   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     
	I0319 20:19:36.562484   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)     
	I0319 20:19:36.562494   49076 main.go:141] libmachine: (kubernetes-upgrade-853797)   </devices>
	I0319 20:19:36.562503   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) </domain>
	I0319 20:19:36.562514   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) 
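	Once the domain XML above has been defined, the same virsh tooling can confirm the result (a sketch under the same assumptions as the network check earlier):
	  # show the stored domain definition
	  virsh dumpxml kubernetes-upgrade-853797
	  # verify both NICs (the 'default' network and mk-kubernetes-upgrade-853797) are attached
	  virsh domiflist kubernetes-upgrade-853797
	  # the DHCP lease the driver polls for below can be read with:
	  virsh net-dhcp-leases mk-kubernetes-upgrade-853797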
	I0319 20:19:36.566913   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:46:ae:21 in network default
	I0319 20:19:36.567386   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:36.567402   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring networks are active...
	I0319 20:19:36.567992   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring network default is active
	I0319 20:19:36.568231   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring network mk-kubernetes-upgrade-853797 is active
	I0319 20:19:36.568706   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Getting domain xml...
	I0319 20:19:36.569416   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Creating domain...
	I0319 20:19:37.812207   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Waiting to get IP...
	I0319 20:19:37.813177   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:37.813631   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:37.813665   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:37.813599   49370 retry.go:31] will retry after 309.865239ms: waiting for machine to come up
	I0319 20:19:38.126914   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:38.127467   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:38.127499   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:38.127410   49370 retry.go:31] will retry after 359.255385ms: waiting for machine to come up
	I0319 20:19:38.488030   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:38.488646   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:38.488676   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:38.488605   49370 retry.go:31] will retry after 450.56048ms: waiting for machine to come up
	I0319 20:19:38.941318   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:38.941819   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:38.941856   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:38.941774   49370 retry.go:31] will retry after 520.285295ms: waiting for machine to come up
	I0319 20:19:39.463583   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:39.464032   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:39.464055   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:39.464002   49370 retry.go:31] will retry after 602.829742ms: waiting for machine to come up
	I0319 20:19:40.069179   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:40.069671   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:40.069700   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:40.069625   49370 retry.go:31] will retry after 777.626086ms: waiting for machine to come up
	I0319 20:19:40.848570   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:40.849049   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:40.849085   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:40.848967   49370 retry.go:31] will retry after 985.062561ms: waiting for machine to come up
	I0319 20:19:41.835615   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:41.836034   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:41.836066   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:41.835980   49370 retry.go:31] will retry after 1.42182424s: waiting for machine to come up
	I0319 20:19:43.259516   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:43.260043   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:43.260064   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:43.260003   49370 retry.go:31] will retry after 1.813247398s: waiting for machine to come up
	I0319 20:19:45.076267   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:45.076805   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:45.076834   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:45.076754   49370 retry.go:31] will retry after 1.819844547s: waiting for machine to come up
	I0319 20:19:46.898501   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:46.898933   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:46.898964   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:46.898874   49370 retry.go:31] will retry after 2.668921256s: waiting for machine to come up
	I0319 20:19:49.569930   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:49.570372   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:49.570397   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:49.570331   49370 retry.go:31] will retry after 2.215057157s: waiting for machine to come up
	I0319 20:19:51.787234   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:51.787621   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:51.787647   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:51.787588   49370 retry.go:31] will retry after 3.858886628s: waiting for machine to come up
	I0319 20:19:55.650706   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:19:55.651097   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:19:55.651124   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:19:55.651043   49370 retry.go:31] will retry after 4.76119008s: waiting for machine to come up
	I0319 20:20:00.417356   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.417736   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has current primary IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.417775   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Found IP for machine: 192.168.50.116
	I0319 20:20:00.417788   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Reserving static IP address...
	I0319 20:20:00.418127   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-853797", mac: "52:54:00:39:a8:7f", ip: "192.168.50.116"} in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.488307   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Getting to WaitForSSH function...
	I0319 20:20:00.488340   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Reserved static IP address: 192.168.50.116
	I0319 20:20:00.488354   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Waiting for SSH to be available...
	I0319 20:20:00.491217   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.491727   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:00.491761   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.491906   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Using SSH client type: external
	I0319 20:20:00.491931   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa (-rw-------)
	I0319 20:20:00.491974   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:20:00.491987   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | About to run SSH command:
	I0319 20:20:00.491998   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | exit 0
	I0319 20:20:00.616578   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | SSH cmd err, output: <nil>: 
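	The WaitForSSH probe above is equivalent to running the logged ssh invocation by hand; a hypothetical manual check using the key path and address from the log would be:
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa \
	      docker@192.168.50.116 'exit 0' && echo "SSH ready"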
	I0319 20:20:00.616860   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) KVM machine creation complete!
	I0319 20:20:00.617126   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetConfigRaw
	I0319 20:20:00.617747   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:00.617948   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:00.618135   49076 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:20:00.618151   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetState
	I0319 20:20:00.619530   49076 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:20:00.619545   49076 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:20:00.619556   49076 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:20:00.619562   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:00.621741   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.622011   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:00.622043   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.622182   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:00.622372   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.622507   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.622623   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:00.622761   49076 main.go:141] libmachine: Using SSH client type: native
	I0319 20:20:00.622964   49076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:20:00.622977   49076 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:20:00.723924   49076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:20:00.723952   49076 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:20:00.723963   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:00.726805   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.727217   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:00.727241   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.727403   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:00.727604   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.727789   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.727949   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:00.728094   49076 main.go:141] libmachine: Using SSH client type: native
	I0319 20:20:00.728338   49076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:20:00.728352   49076 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:20:00.829903   49076 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:20:00.829988   49076 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:20:00.829997   49076 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:20:00.830005   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:20:00.830265   49076 buildroot.go:166] provisioning hostname "kubernetes-upgrade-853797"
	I0319 20:20:00.830296   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:20:00.830475   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:00.832991   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.833336   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:00.833363   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.833559   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:00.833735   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.833911   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.834017   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:00.834165   49076 main.go:141] libmachine: Using SSH client type: native
	I0319 20:20:00.834409   49076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:20:00.834430   49076 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-853797 && echo "kubernetes-upgrade-853797" | sudo tee /etc/hostname
	I0319 20:20:00.952658   49076 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-853797
	
	I0319 20:20:00.952692   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:00.955497   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.955994   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:00.956026   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:00.956188   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:00.956389   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.956554   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:00.956684   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:00.956815   49076 main.go:141] libmachine: Using SSH client type: native
	I0319 20:20:00.956993   49076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:20:00.957010   49076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-853797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-853797/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-853797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:20:01.066413   49076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:20:01.066457   49076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:20:01.066505   49076 buildroot.go:174] setting up certificates
	I0319 20:20:01.066519   49076 provision.go:84] configureAuth start
	I0319 20:20:01.066533   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:20:01.066871   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:20:01.069390   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.069704   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.069736   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.069865   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.072190   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.072579   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.072620   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.072819   49076 provision.go:143] copyHostCerts
	I0319 20:20:01.072889   49076 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:20:01.072902   49076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:20:01.073054   49076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:20:01.073188   49076 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:20:01.073201   49076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:20:01.073229   49076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:20:01.073301   49076 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:20:01.073312   49076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:20:01.073338   49076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:20:01.073403   49076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-853797 san=[127.0.0.1 192.168.50.116 kubernetes-upgrade-853797 localhost minikube]
	I0319 20:20:01.128995   49076 provision.go:177] copyRemoteCerts
	I0319 20:20:01.129052   49076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:20:01.129073   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.131638   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.132065   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.132095   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.132296   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:01.132473   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.132639   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:01.132808   49076 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:20:01.214808   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:20:01.247500   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0319 20:20:01.275363   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
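	After copyRemoteCerts, the server certificate on the guest should carry the SANs listed in the provision step above (127.0.0.1, 192.168.50.116, kubernetes-upgrade-853797, localhost, minikube); an illustrative verification, assuming openssl is available inside the guest image:
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'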
	I0319 20:20:01.302229   49076 provision.go:87] duration metric: took 235.696261ms to configureAuth
	I0319 20:20:01.302261   49076 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:20:01.302482   49076 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:20:01.302581   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.305026   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.305421   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.305452   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.305619   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:01.305825   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.306015   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.306177   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:01.306399   49076 main.go:141] libmachine: Using SSH client type: native
	I0319 20:20:01.306562   49076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:20:01.306581   49076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:20:01.597152   49076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:20:01.597178   49076 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:20:01.597186   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetURL
	I0319 20:20:01.598426   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | Using libvirt version 6000000
	I0319 20:20:01.600863   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.601171   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.601197   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.601377   49076 main.go:141] libmachine: Docker is up and running!
	I0319 20:20:01.601387   49076 main.go:141] libmachine: Reticulating splines...
	I0319 20:20:01.601393   49076 client.go:171] duration metric: took 25.464464846s to LocalClient.Create
	I0319 20:20:01.601421   49076 start.go:167] duration metric: took 25.46453259s to libmachine.API.Create "kubernetes-upgrade-853797"
	I0319 20:20:01.601434   49076 start.go:293] postStartSetup for "kubernetes-upgrade-853797" (driver="kvm2")
	I0319 20:20:01.601453   49076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:20:01.601472   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:01.601733   49076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:20:01.601838   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.604045   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.604412   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.604440   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.604553   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:01.604723   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.604849   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:01.605005   49076 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:20:01.690024   49076 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:20:01.695201   49076 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:20:01.695223   49076 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:20:01.695284   49076 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:20:01.695374   49076 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:20:01.695494   49076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:20:01.706422   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:20:01.733257   49076 start.go:296] duration metric: took 131.801475ms for postStartSetup
	I0319 20:20:01.733317   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetConfigRaw
	I0319 20:20:01.733935   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:20:01.736976   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.737317   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.737360   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.737589   49076 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/config.json ...
	I0319 20:20:01.737808   49076 start.go:128] duration metric: took 25.623295539s to createHost
	I0319 20:20:01.737839   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.740093   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.740462   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.740498   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.740616   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:01.740784   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.740974   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.741106   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:01.741278   49076 main.go:141] libmachine: Using SSH client type: native
	I0319 20:20:01.741431   49076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:20:01.741442   49076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:20:01.841853   49076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879601.826357877
	
	I0319 20:20:01.841884   49076 fix.go:216] guest clock: 1710879601.826357877
	I0319 20:20:01.841896   49076 fix.go:229] Guest: 2024-03-19 20:20:01.826357877 +0000 UTC Remote: 2024-03-19 20:20:01.737824165 +0000 UTC m=+51.098108775 (delta=88.533712ms)
	I0319 20:20:01.841940   49076 fix.go:200] guest clock delta is within tolerance: 88.533712ms
	I0319 20:20:01.841948   49076 start.go:83] releasing machines lock for "kubernetes-upgrade-853797", held for 25.72759667s
	I0319 20:20:01.841984   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:01.842258   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:20:01.845150   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.845487   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.845518   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.845699   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:01.846335   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:01.846522   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:20:01.846624   49076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:20:01.846671   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.846740   49076 ssh_runner.go:195] Run: cat /version.json
	I0319 20:20:01.846758   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:20:01.849620   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.849775   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.850049   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.850075   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.850115   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:01.850141   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:01.850404   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:01.850419   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:20:01.850595   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.850600   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:20:01.850761   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:01.850768   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:20:01.850935   49076 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:20:01.850938   49076 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:20:01.968708   49076 ssh_runner.go:195] Run: systemctl --version
	I0319 20:20:01.975669   49076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:20:02.144839   49076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:20:02.152102   49076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:20:02.152187   49076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:20:02.170103   49076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:20:02.170129   49076 start.go:494] detecting cgroup driver to use...
	I0319 20:20:02.170192   49076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:20:02.194244   49076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:20:02.210054   49076 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:20:02.210120   49076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:20:02.224921   49076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:20:02.239986   49076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:20:02.365513   49076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:20:02.538527   49076 docker.go:233] disabling docker service ...
	I0319 20:20:02.538601   49076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:20:02.556525   49076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:20:02.574020   49076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:20:02.697970   49076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:20:02.839475   49076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:20:02.856069   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:20:02.879818   49076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:20:02.879908   49076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:20:02.893639   49076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:20:02.893700   49076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:20:02.908778   49076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:20:02.922892   49076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
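	The three sed edits above amount to the following drop-in, shown here only as the expected end state (the TOML section names follow the standard CRI-O config layout and are not quoted in the log, so treat them as an assumption):
	  # /etc/crio/crio.conf.d/02-crio.conf (illustrative)
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.2"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"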
	I0319 20:20:02.936748   49076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:20:02.949819   49076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:20:02.961195   49076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:20:02.961244   49076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:20:02.978577   49076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
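The failed sysctl a few lines up just means br_netfilter was not loaded yet; minikube responds by loading the module and then enabling IP forwarding directly. A manual verification of the same state (a sketch; the bridge sysctl is typically 1 once the module is loaded, and ip_forward is set to 1 by the echo above) would be:

    $ sudo modprobe br_netfilter
    $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # expected on a correctly prepared node:
    # net.bridge.bridge-nf-call-iptables = 1
    # net.ipv4.ip_forward = 1
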
	I0319 20:20:02.992092   49076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:20:03.134181   49076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:20:03.308516   49076 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:20:03.308598   49076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:20:03.314220   49076 start.go:562] Will wait 60s for crictl version
	I0319 20:20:03.314281   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:03.319159   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:20:03.366711   49076 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:20:03.366837   49076 ssh_runner.go:195] Run: crio --version
	I0319 20:20:03.401507   49076 ssh_runner.go:195] Run: crio --version
	I0319 20:20:03.450953   49076 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:20:03.452377   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:20:03.455029   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:03.455330   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:19:52 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:20:03.455369   49076 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:20:03.455513   49076 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:20:03.460556   49076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:20:03.477124   49076 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:20:03.477266   49076 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:20:03.477328   49076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:20:03.529954   49076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:20:03.530023   49076 ssh_runner.go:195] Run: which lz4
	I0319 20:20:03.535082   49076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:20:03.540007   49076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:20:03.540033   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:20:05.661173   49076 crio.go:462] duration metric: took 2.12613853s to copy over tarball
	I0319 20:20:05.661263   49076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:20:08.929372   49076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.268082301s)
	I0319 20:20:08.929397   49076 crio.go:469] duration metric: took 3.268193627s to extract the tarball
	I0319 20:20:08.929404   49076 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:20:08.973794   49076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:20:09.027821   49076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:20:09.027849   49076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:20:09.027933   49076 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:20:09.027951   49076 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:20:09.027962   49076 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:20:09.027933   49076 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:20:09.028001   49076 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:20:09.028004   49076 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:20:09.028217   49076 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:20:09.028428   49076 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:20:09.030025   49076 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:20:09.030066   49076 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:20:09.030207   49076 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:20:09.030217   49076 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:20:09.030376   49076 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:20:09.030370   49076 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:20:09.030447   49076 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:20:09.030524   49076 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:20:09.172749   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:20:09.176398   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:20:09.185798   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:20:09.187235   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:20:09.211114   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:20:09.212499   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:20:09.227428   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:20:09.286944   49076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:20:09.286986   49076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:20:09.287031   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.287029   49076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:20:09.287103   49076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:20:09.287126   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.376573   49076 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:20:09.376621   49076 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:20:09.376677   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.388040   49076 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:20:09.388096   49076 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:20:09.388161   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.410479   49076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:20:09.410524   49076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:20:09.410578   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.424394   49076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:20:09.424419   49076 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:20:09.424448   49076 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:20:09.424442   49076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:20:09.424461   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:20:09.424486   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.424488   49076 ssh_runner.go:195] Run: which crictl
	I0319 20:20:09.424567   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:20:09.424643   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:20:09.424684   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:20:09.424743   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:20:09.536171   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:20:09.536312   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:20:09.556850   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:20:09.556913   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:20:09.556957   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:20:09.557030   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:20:09.557094   49076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:20:09.607443   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:20:09.607582   49076 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:20:10.033712   49076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:20:10.181897   49076 cache_images.go:92] duration metric: took 1.154029115s to LoadCachedImages
	W0319 20:20:10.181995   49076 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0319 20:20:10.182013   49076 kubeadm.go:928] updating node { 192.168.50.116 8443 v1.20.0 crio true true} ...
	I0319 20:20:10.182155   49076 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-853797 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:20:10.182255   49076 ssh_runner.go:195] Run: crio config
	I0319 20:20:10.239573   49076 cni.go:84] Creating CNI manager for ""
	I0319 20:20:10.239596   49076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:20:10.239607   49076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:20:10.239623   49076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-853797 NodeName:kubernetes-upgrade-853797 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:20:10.239754   49076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-853797"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
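One detail worth calling out in the generated config above: the KubeletConfiguration's cgroupDriver (cgroupfs) has to agree with the cgroup_manager CRI-O was switched to earlier in this run. Once the config has been copied to /var/tmp/minikube/kubeadm.yaml (see the cp at 20:20:11.874 below), a cross-check on the node (a sketch) would be:

    $ grep cgroupDriver /var/tmp/minikube/kubeadm.yaml
    cgroupDriver: cgroupfs
    $ sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
    cgroup_manager = "cgroupfs"
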
	
	I0319 20:20:10.239812   49076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:20:10.251958   49076 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:20:10.252022   49076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:20:10.264774   49076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0319 20:20:10.283442   49076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:20:10.302623   49076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
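At this point three files have been pushed to the node: the kubelet systemd drop-in, the kubelet unit, and the kubeadm config staged as kubeadm.yaml.new. A quick sanity check that they landed with the sizes reported in the scp lines above (a sketch) would be:

    $ ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
            /lib/systemd/system/kubelet.service \
            /var/tmp/minikube/kubeadm.yaml.new
    # expected sizes per the log: 433, 352 and 2126 bytes respectively
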
	I0319 20:20:10.322660   49076 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0319 20:20:10.327619   49076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:20:10.343839   49076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:20:10.486632   49076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:20:10.509868   49076 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797 for IP: 192.168.50.116
	I0319 20:20:10.509893   49076 certs.go:194] generating shared ca certs ...
	I0319 20:20:10.509912   49076 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:10.510087   49076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:20:10.510140   49076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:20:10.510152   49076 certs.go:256] generating profile certs ...
	I0319 20:20:10.510219   49076 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.key
	I0319 20:20:10.510238   49076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.crt with IP's: []
	I0319 20:20:10.662074   49076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.crt ...
	I0319 20:20:10.662106   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.crt: {Name:mk267468340ea83c2868e404c3ae47fc2aa7a5fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:10.662299   49076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.key ...
	I0319 20:20:10.662319   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.key: {Name:mke08a6ee79e8043cb4e3ad622c4218d1043127b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:10.662420   49076 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key.d15cc93c
	I0319 20:20:10.662440   49076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt.d15cc93c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.116]
	I0319 20:20:11.042003   49076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt.d15cc93c ...
	I0319 20:20:11.042029   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt.d15cc93c: {Name:mk77853a4265ea056438d948f2bd758048cc0385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:11.042182   49076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key.d15cc93c ...
	I0319 20:20:11.042195   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key.d15cc93c: {Name:mk81fc734a7da77c61a61e25ba383e6c24577566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:11.042259   49076 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt.d15cc93c -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt
	I0319 20:20:11.042338   49076 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key.d15cc93c -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key
	I0319 20:20:11.042394   49076 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.key
	I0319 20:20:11.042409   49076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.crt with IP's: []
	I0319 20:20:11.188165   49076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.crt ...
	I0319 20:20:11.188197   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.crt: {Name:mk6d9b3f6b20d0c73066dcc754a522d354ed0369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:11.188393   49076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.key ...
	I0319 20:20:11.188415   49076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.key: {Name:mkef153de0719ae275377f5c5d96c786bb757e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:20:11.188607   49076 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:20:11.188644   49076 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:20:11.188654   49076 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:20:11.188674   49076 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:20:11.188697   49076 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:20:11.188718   49076 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:20:11.188767   49076 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:20:11.189359   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:20:11.228204   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:20:11.259318   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:20:11.299195   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:20:11.356437   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0319 20:20:11.402388   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:20:11.443826   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:20:11.490344   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:20:11.522538   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:20:11.559525   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:20:11.598649   49076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:20:11.634996   49076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:20:11.660728   49076 ssh_runner.go:195] Run: openssl version
	I0319 20:20:11.667743   49076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:20:11.681303   49076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:20:11.687212   49076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:20:11.687276   49076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:20:11.694247   49076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:20:11.707208   49076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:20:11.720666   49076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:20:11.727467   49076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:20:11.727561   49076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:20:11.735943   49076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:20:11.754312   49076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:20:11.773564   49076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:20:11.781298   49076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:20:11.781368   49076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:20:11.790933   49076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
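The hash-named symlinks created in this block follow OpenSSL's c_rehash convention: each link under /etc/ssl/certs is named after the subject hash of the certificate it points to, which is why every ln -fs is preceded by an openssl x509 -hash call. For the minikube CA this looks like (a sketch; listing output abridged):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0
    ... /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem
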
	I0319 20:20:11.809790   49076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:20:11.815371   49076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 20:20:11.815439   49076 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:20:11.815538   49076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:20:11.815598   49076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:20:11.858994   49076 cri.go:89] found id: ""
	I0319 20:20:11.859071   49076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 20:20:11.874300   49076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:20:11.888034   49076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:20:11.904445   49076 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:20:11.904466   49076 kubeadm.go:156] found existing configuration files:
	
	I0319 20:20:11.904522   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:20:11.917081   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:20:11.917152   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:20:11.932071   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:20:11.947428   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:20:11.947510   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:20:11.962764   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:20:11.974408   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:20:11.974468   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:20:11.987908   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:20:11.999945   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:20:12.000026   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:20:12.016886   49076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:20:12.386547   49076 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:22:10.873650   49076 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:22:10.873723   49076 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:22:10.875456   49076 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:22:10.875526   49076 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:22:10.875618   49076 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:22:10.875734   49076 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:22:10.875850   49076 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:22:10.875929   49076 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:22:10.878109   49076 out.go:204]   - Generating certificates and keys ...
	I0319 20:22:10.878168   49076 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:22:10.878229   49076 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:22:10.878355   49076 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 20:22:10.878446   49076 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 20:22:10.878533   49076 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 20:22:10.878603   49076 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 20:22:10.878681   49076 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 20:22:10.878819   49076 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-853797 localhost] and IPs [192.168.50.116 127.0.0.1 ::1]
	I0319 20:22:10.878894   49076 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 20:22:10.879041   49076 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-853797 localhost] and IPs [192.168.50.116 127.0.0.1 ::1]
	I0319 20:22:10.879138   49076 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 20:22:10.879242   49076 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 20:22:10.879306   49076 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 20:22:10.879386   49076 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:22:10.879466   49076 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:22:10.879545   49076 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:22:10.879622   49076 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:22:10.879686   49076 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:22:10.879791   49076 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:22:10.879915   49076 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:22:10.879982   49076 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:22:10.880065   49076 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:22:10.881620   49076 out.go:204]   - Booting up control plane ...
	I0319 20:22:10.881702   49076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:22:10.881785   49076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:22:10.881867   49076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:22:10.881965   49076 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:22:10.882145   49076 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:22:10.882193   49076 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:22:10.882262   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:22:10.882445   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:22:10.882538   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:22:10.882727   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:22:10.882823   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:22:10.883020   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:22:10.883124   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:22:10.883296   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:22:10.883377   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:22:10.883571   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:22:10.883583   49076 kubeadm.go:309] 
	I0319 20:22:10.883626   49076 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:22:10.883661   49076 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:22:10.883667   49076 kubeadm.go:309] 
	I0319 20:22:10.883700   49076 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:22:10.883730   49076 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:22:10.883833   49076 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:22:10.883848   49076 kubeadm.go:309] 
	I0319 20:22:10.883971   49076 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:22:10.884012   49076 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:22:10.884058   49076 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:22:10.884067   49076 kubeadm.go:309] 
	I0319 20:22:10.884200   49076 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:22:10.884316   49076 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:22:10.884326   49076 kubeadm.go:309] 
	I0319 20:22:10.884444   49076 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:22:10.884562   49076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:22:10.884659   49076 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:22:10.884773   49076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:22:10.884833   49076 kubeadm.go:309] 
	W0319 20:22:10.884904   49076 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-853797 localhost] and IPs [192.168.50.116 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-853797 localhost] and IPs [192.168.50.116 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-853797 localhost] and IPs [192.168.50.116 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-853797 localhost] and IPs [192.168.50.116 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
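	Given the kubelet-health diagnosis above, one common cause worth ruling out on the node (reachable with 'minikube -p kubernetes-upgrade-853797 ssh') is a mismatch between the kubelet's cgroup driver and CRI-O's cgroup manager. A generic sketch of those checks follows; the commands and field names are the upstream kubelet/CRI-O ones and are not output captured by this run:
	
	  sudo systemctl status kubelet --no-pager
	  sudo journalctl -xeu kubelet | tail -n 50
	  sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	  sudo crio config | grep -i cgroup_manager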
	
	I0319 20:22:10.884958   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:22:13.568928   49076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.683944905s)
	I0319 20:22:13.569009   49076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:22:13.588526   49076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:22:13.600446   49076 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:22:13.600469   49076 kubeadm.go:156] found existing configuration files:
	
	I0319 20:22:13.600521   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:22:13.613502   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:22:13.613576   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:22:13.627020   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:22:13.641973   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:22:13.642041   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:22:13.655741   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:22:13.667457   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:22:13.667511   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:22:13.680007   49076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:22:13.690985   49076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:22:13.691042   49076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
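	The grep-then-remove pairs above are minikube clearing out any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint before retrying kubeadm init. Condensed into a single shell sketch of the same logic (illustrative only, not a command taken from this log):
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done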
	I0319 20:22:13.701930   49076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:22:13.791573   49076 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:22:13.791622   49076 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:22:13.971210   49076 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:22:13.971328   49076 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:22:13.971434   49076 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:22:14.182692   49076 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:22:14.184909   49076 out.go:204]   - Generating certificates and keys ...
	I0319 20:22:14.185013   49076 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:22:14.185131   49076 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:22:14.185261   49076 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:22:14.185338   49076 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:22:14.185429   49076 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:22:14.185503   49076 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:22:14.185825   49076 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:22:14.186189   49076 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:22:14.186725   49076 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:22:14.187154   49076 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:22:14.187234   49076 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:22:14.187308   49076 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:22:14.689499   49076 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:22:14.838166   49076 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:22:15.009211   49076 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:22:15.264764   49076 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:22:15.293090   49076 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:22:15.293645   49076 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:22:15.293822   49076 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:22:15.487873   49076 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:22:15.489631   49076 out.go:204]   - Booting up control plane ...
	I0319 20:22:15.489742   49076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:22:15.503722   49076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:22:15.505633   49076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:22:15.506725   49076 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:22:15.510257   49076 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:22:55.512489   49076 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:22:55.512654   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:22:55.512847   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:23:00.513576   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:23:00.513873   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:23:10.514686   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:23:10.514964   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:23:30.516132   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:23:30.516398   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:24:10.516611   49076 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:24:10.516855   49076 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:24:10.516869   49076 kubeadm.go:309] 
	I0319 20:24:10.516920   49076 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:24:10.516979   49076 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:24:10.516987   49076 kubeadm.go:309] 
	I0319 20:24:10.517030   49076 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:24:10.517064   49076 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:24:10.517192   49076 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:24:10.517201   49076 kubeadm.go:309] 
	I0319 20:24:10.517350   49076 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:24:10.517396   49076 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:24:10.517449   49076 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:24:10.517457   49076 kubeadm.go:309] 
	I0319 20:24:10.517585   49076 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:24:10.517692   49076 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:24:10.517699   49076 kubeadm.go:309] 
	I0319 20:24:10.517838   49076 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:24:10.517943   49076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:24:10.518039   49076 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:24:10.518134   49076 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:24:10.518141   49076 kubeadm.go:309] 
	I0319 20:24:10.519330   49076 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:24:10.519450   49076 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:24:10.519564   49076 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:24:10.519648   49076 kubeadm.go:393] duration metric: took 3m58.704213192s to StartCluster
	I0319 20:24:10.519717   49076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:24:10.519783   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:24:10.592852   49076 cri.go:89] found id: ""
	I0319 20:24:10.592886   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.592900   49076 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:24:10.592909   49076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:24:10.592980   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:24:10.653404   49076 cri.go:89] found id: ""
	I0319 20:24:10.653432   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.653443   49076 logs.go:278] No container was found matching "etcd"
	I0319 20:24:10.653450   49076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:24:10.653506   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:24:10.712742   49076 cri.go:89] found id: ""
	I0319 20:24:10.712765   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.712772   49076 logs.go:278] No container was found matching "coredns"
	I0319 20:24:10.712777   49076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:24:10.712832   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:24:10.776231   49076 cri.go:89] found id: ""
	I0319 20:24:10.776273   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.776285   49076 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:24:10.776293   49076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:24:10.776345   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:24:10.821752   49076 cri.go:89] found id: ""
	I0319 20:24:10.821785   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.821795   49076 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:24:10.821803   49076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:24:10.821861   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:24:10.868382   49076 cri.go:89] found id: ""
	I0319 20:24:10.868410   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.868420   49076 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:24:10.868428   49076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:24:10.868493   49076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:24:10.921221   49076 cri.go:89] found id: ""
	I0319 20:24:10.921250   49076 logs.go:276] 0 containers: []
	W0319 20:24:10.921261   49076 logs.go:278] No container was found matching "kindnet"
	I0319 20:24:10.921272   49076 logs.go:123] Gathering logs for kubelet ...
	I0319 20:24:10.921289   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:24:10.996486   49076 logs.go:123] Gathering logs for dmesg ...
	I0319 20:24:10.996525   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:24:11.018389   49076 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:24:11.018420   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:24:11.190883   49076 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:24:11.190908   49076 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:24:11.190924   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:24:11.309574   49076 logs.go:123] Gathering logs for container status ...
	I0319 20:24:11.309632   49076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:24:11.372298   49076 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:24:11.372350   49076 out.go:239] * 
	* 
	W0319 20:24:11.372412   49076 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:24:11.372444   49076 out.go:239] * 
	* 
	W0319 20:24:11.373621   49076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:24:11.377006   49076 out.go:177] 
	W0319 20:24:11.378323   49076 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:24:11.378376   49076 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:24:11.378403   49076 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:24:11.379864   49076 out.go:177] 

                                                
                                                
** /stderr **
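Applying the suggestion printed above to this run's own invocation would look roughly like the following; the flags mirror the failing start command and only add the suggested kubelet extra-config (a sketch, not a command the test actually executed):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd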
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-853797
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-853797: (1.655834259s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-853797 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-853797 status --format={{.Host}}: exit status 7 (84.421073ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.990606846s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-853797 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (92.681725ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-853797] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-853797
	    minikube start -p kubernetes-upgrade-853797 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8537972 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-853797 --kubernetes-version=v1.30.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-853797 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.586583534s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-19 20:26:37.911885985 +0000 UTC m=+4922.032505749
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-853797 -n kubernetes-upgrade-853797
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-853797 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-853797 logs -n 25: (1.888364512s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-378078 sudo cat              | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat              | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                  | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                  | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                  | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo find             | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo crio             | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-378078                       | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	| start   | -p force-systemd-env-587385            | force-systemd-env-587385  | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-746219                        | pause-746219              | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-910871 ssh cat      | force-systemd-flag-910871 | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-910871           | force-systemd-flag-910871 | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	| start   | -p cert-options-346618                 | cert-options-346618       | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-853797           | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p kubernetes-upgrade-853797           | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:25 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0    |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-587385            | force-systemd-env-587385  | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p cert-expiration-428153              | cert-expiration-428153    | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:25 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p pause-746219                        | pause-746219              | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p old-k8s-version-159022              | old-k8s-version-159022    | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| ssh     | cert-options-346618 ssh                | cert-options-346618       | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-346618 -- sudo         | cert-options-346618       | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-346618                 | cert-options-346618       | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p no-preload-414130 --memory=2200     | no-preload-414130         | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797           | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797           | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0    |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:25:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:25:15.375913   56472 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:25:15.376162   56472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:25:15.376170   56472 out.go:304] Setting ErrFile to fd 2...
	I0319 20:25:15.376175   56472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:25:15.376376   56472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:25:15.376942   56472 out.go:298] Setting JSON to false
	I0319 20:25:15.377848   56472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7613,"bootTime":1710872302,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:25:15.377915   56472 start.go:139] virtualization: kvm guest
	I0319 20:25:15.380147   56472 out.go:177] * [kubernetes-upgrade-853797] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:25:15.381878   56472 notify.go:220] Checking for updates...
	I0319 20:25:15.381911   56472 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:25:15.383450   56472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:25:15.384746   56472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:25:15.386085   56472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:15.387319   56472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:25:15.388516   56472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:25:15.390025   56472 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:25:15.390398   56472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:15.390457   56472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:15.405666   56472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35807
	I0319 20:25:15.406018   56472 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:15.406538   56472 main.go:141] libmachine: Using API Version  1
	I0319 20:25:15.406557   56472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:15.406870   56472 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:15.407100   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:25:15.407338   56472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:25:15.407677   56472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:15.407716   56472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:15.422842   56472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0319 20:25:15.423250   56472 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:15.423723   56472 main.go:141] libmachine: Using API Version  1
	I0319 20:25:15.423749   56472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:15.424063   56472 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:15.424245   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:25:15.460053   56472 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:25:15.461119   56472 start.go:297] selected driver: kvm2
	I0319 20:25:15.461132   56472 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:25:15.461265   56472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:25:15.461934   56472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:25:15.461998   56472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:25:15.476429   56472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:25:15.476934   56472 cni.go:84] Creating CNI manager for ""
	I0319 20:25:15.476955   56472 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:25:15.477004   56472 start.go:340] cluster config:
	{Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:25:15.477131   56472 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:25:15.478963   56472 out.go:177] * Starting "kubernetes-upgrade-853797" primary control-plane node in "kubernetes-upgrade-853797" cluster
	I0319 20:25:14.921181   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:14.921687   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | unable to find current IP address of domain cert-expiration-428153 in network mk-cert-expiration-428153
	I0319 20:25:14.921725   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | I0319 20:25:14.921628   56227 retry.go:31] will retry after 3.523946469s: waiting for machine to come up
	I0319 20:25:18.448403   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.449042   55554 main.go:141] libmachine: (cert-expiration-428153) Found IP for machine: 192.168.39.211
	I0319 20:25:18.449056   55554 main.go:141] libmachine: (cert-expiration-428153) Reserving static IP address...
	I0319 20:25:18.449069   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has current primary IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.449469   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | unable to find host DHCP lease matching {name: "cert-expiration-428153", mac: "52:54:00:19:c4:1a", ip: "192.168.39.211"} in network mk-cert-expiration-428153
	I0319 20:25:18.521555   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | Getting to WaitForSSH function...
	I0319 20:25:18.521578   55554 main.go:141] libmachine: (cert-expiration-428153) Reserved static IP address: 192.168.39.211
	I0319 20:25:18.521591   55554 main.go:141] libmachine: (cert-expiration-428153) Waiting for SSH to be available...
	I0319 20:25:18.523765   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.524218   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:18.524247   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.524371   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | Using SSH client type: external
	I0319 20:25:18.524396   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa (-rw-------)
	I0319 20:25:18.524413   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:25:18.524422   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | About to run SSH command:
	I0319 20:25:18.524430   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | exit 0
	I0319 20:25:18.660448   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | SSH cmd err, output: <nil>: 
	I0319 20:25:18.660681   55554 main.go:141] libmachine: (cert-expiration-428153) KVM machine creation complete!
	I0319 20:25:18.661014   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetConfigRaw
	I0319 20:25:18.661580   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:18.661743   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:18.661878   55554 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:25:18.661896   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetState
	I0319 20:25:18.663210   55554 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:25:18.663217   55554 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:25:18.663221   55554 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:25:18.663226   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:18.665496   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.665835   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:18.665853   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.665980   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:18.666135   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:18.666261   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:18.666365   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:18.666505   55554 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:18.666750   55554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0319 20:25:18.666758   55554 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:25:20.013507   55982 start.go:364] duration metric: took 33.603586686s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:25:20.013579   55982 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:25:20.013715   55982 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 20:25:15.480112   56472 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:25:15.480140   56472 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0319 20:25:15.480146   56472 cache.go:56] Caching tarball of preloaded images
	I0319 20:25:15.480210   56472 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:25:15.480221   56472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on crio
	I0319 20:25:15.480332   56472 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/config.json ...
	I0319 20:25:15.480502   56472 start.go:360] acquireMachinesLock for kubernetes-upgrade-853797: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:25:20.016016   55982 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 20:25:20.016216   55982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:20.016272   55982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:20.032911   55982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0319 20:25:20.033334   55982 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:20.033953   55982 main.go:141] libmachine: Using API Version  1
	I0319 20:25:20.033981   55982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:20.034279   55982 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:20.034454   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:20.034596   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:20.034732   55982 start.go:159] libmachine.API.Create for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:25:20.034761   55982 client.go:168] LocalClient.Create starting
	I0319 20:25:20.034790   55982 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 20:25:20.034822   55982 main.go:141] libmachine: Decoding PEM data...
	I0319 20:25:20.034836   55982 main.go:141] libmachine: Parsing certificate...
	I0319 20:25:20.034885   55982 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 20:25:20.034903   55982 main.go:141] libmachine: Decoding PEM data...
	I0319 20:25:20.034914   55982 main.go:141] libmachine: Parsing certificate...
	I0319 20:25:20.034930   55982 main.go:141] libmachine: Running pre-create checks...
	I0319 20:25:20.034939   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .PreCreateCheck
	I0319 20:25:20.035232   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:25:20.035646   55982 main.go:141] libmachine: Creating machine...
	I0319 20:25:20.035664   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .Create
	I0319 20:25:20.035810   55982 main.go:141] libmachine: (old-k8s-version-159022) Creating KVM machine...
	I0319 20:25:20.036933   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found existing default KVM network
	I0319 20:25:20.037991   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.037847   56546 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:20:18} reservation:<nil>}
	I0319 20:25:20.038687   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.038572   56546 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:58:8d} reservation:<nil>}
	I0319 20:25:20.039469   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.039386   56546 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ca4e0}
	I0319 20:25:20.039509   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | created network xml: 
	I0319 20:25:20.039532   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | <network>
	I0319 20:25:20.039545   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   <name>mk-old-k8s-version-159022</name>
	I0319 20:25:20.039562   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   <dns enable='no'/>
	I0319 20:25:20.039575   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   
	I0319 20:25:20.039585   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0319 20:25:20.039598   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |     <dhcp>
	I0319 20:25:20.039616   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0319 20:25:20.039626   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |     </dhcp>
	I0319 20:25:20.039633   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   </ip>
	I0319 20:25:20.039641   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   
	I0319 20:25:20.039648   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | </network>
	I0319 20:25:20.039658   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | 
	I0319 20:25:20.044896   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | trying to create private KVM network mk-old-k8s-version-159022 192.168.61.0/24...
	I0319 20:25:20.113584   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | private KVM network mk-old-k8s-version-159022 192.168.61.0/24 created
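The network XML printed above is the private libvirt network minikube creates for the old-k8s-version-159022 profile. As a minimal sketch of the same step performed by hand, the Go program below writes an equivalent definition to a scratch file and registers it with the virsh CLI; minikube itself uses the libvirt Go bindings rather than virsh, and the file path here is illustrative only.

package main

import (
	"log"
	"os"
	"os/exec"
)

// networkXML mirrors the definition printed in the log above.
const networkXML = `<network>
  <name>mk-old-k8s-version-159022</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the definition to a scratch file and hand it to virsh (path is illustrative).
	path := "/tmp/mk-old-k8s-version-159022.xml"
	if err := os.WriteFile(path, []byte(networkXML), 0o644); err != nil {
		log.Fatal(err)
	}
	// Define the network, start it, and mark it autostart, all against qemu:///system.
	for _, args := range [][]string{
		{"net-define", path},
		{"net-start", "mk-old-k8s-version-159022"},
		{"net-autostart", "mk-old-k8s-version-159022"},
	} {
		out, err := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}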
	I0319 20:25:20.113614   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022 ...
	I0319 20:25:20.113628   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.113550   56546 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:20.113655   55982 main.go:141] libmachine: (old-k8s-version-159022) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 20:25:20.113670   55982 main.go:141] libmachine: (old-k8s-version-159022) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 20:25:20.346103   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.345974   56546 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa...
	I0319 20:25:20.422890   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.422761   56546 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/old-k8s-version-159022.rawdisk...
	I0319 20:25:20.422926   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Writing magic tar header
	I0319 20:25:20.422944   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Writing SSH key tar header
	I0319 20:25:20.423104   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.422988   56546 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022 ...
	I0319 20:25:20.423169   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022 (perms=drwx------)
	I0319 20:25:20.423184   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022
	I0319 20:25:20.423201   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 20:25:20.423217   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 20:25:20.423230   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 20:25:20.423236   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 20:25:20.423247   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 20:25:20.423260   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:20.423274   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 20:25:20.423291   55982 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:25:20.423307   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 20:25:20.423316   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 20:25:20.423322   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins
	I0319 20:25:20.423333   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home
	I0319 20:25:20.423345   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Skipping /home - not owner
	I0319 20:25:20.424469   55982 main.go:141] libmachine: (old-k8s-version-159022) define libvirt domain using xml: 
	I0319 20:25:20.424484   55982 main.go:141] libmachine: (old-k8s-version-159022) <domain type='kvm'>
	I0319 20:25:20.424492   55982 main.go:141] libmachine: (old-k8s-version-159022)   <name>old-k8s-version-159022</name>
	I0319 20:25:20.424500   55982 main.go:141] libmachine: (old-k8s-version-159022)   <memory unit='MiB'>2200</memory>
	I0319 20:25:20.424505   55982 main.go:141] libmachine: (old-k8s-version-159022)   <vcpu>2</vcpu>
	I0319 20:25:20.424510   55982 main.go:141] libmachine: (old-k8s-version-159022)   <features>
	I0319 20:25:20.424517   55982 main.go:141] libmachine: (old-k8s-version-159022)     <acpi/>
	I0319 20:25:20.424536   55982 main.go:141] libmachine: (old-k8s-version-159022)     <apic/>
	I0319 20:25:20.424549   55982 main.go:141] libmachine: (old-k8s-version-159022)     <pae/>
	I0319 20:25:20.424569   55982 main.go:141] libmachine: (old-k8s-version-159022)     
	I0319 20:25:20.424581   55982 main.go:141] libmachine: (old-k8s-version-159022)   </features>
	I0319 20:25:20.424596   55982 main.go:141] libmachine: (old-k8s-version-159022)   <cpu mode='host-passthrough'>
	I0319 20:25:20.424607   55982 main.go:141] libmachine: (old-k8s-version-159022)   
	I0319 20:25:20.424614   55982 main.go:141] libmachine: (old-k8s-version-159022)   </cpu>
	I0319 20:25:20.424625   55982 main.go:141] libmachine: (old-k8s-version-159022)   <os>
	I0319 20:25:20.424635   55982 main.go:141] libmachine: (old-k8s-version-159022)     <type>hvm</type>
	I0319 20:25:20.424661   55982 main.go:141] libmachine: (old-k8s-version-159022)     <boot dev='cdrom'/>
	I0319 20:25:20.424684   55982 main.go:141] libmachine: (old-k8s-version-159022)     <boot dev='hd'/>
	I0319 20:25:20.424706   55982 main.go:141] libmachine: (old-k8s-version-159022)     <bootmenu enable='no'/>
	I0319 20:25:20.424721   55982 main.go:141] libmachine: (old-k8s-version-159022)   </os>
	I0319 20:25:20.424734   55982 main.go:141] libmachine: (old-k8s-version-159022)   <devices>
	I0319 20:25:20.424742   55982 main.go:141] libmachine: (old-k8s-version-159022)     <disk type='file' device='cdrom'>
	I0319 20:25:20.424759   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/boot2docker.iso'/>
	I0319 20:25:20.424778   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target dev='hdc' bus='scsi'/>
	I0319 20:25:20.424788   55982 main.go:141] libmachine: (old-k8s-version-159022)       <readonly/>
	I0319 20:25:20.424797   55982 main.go:141] libmachine: (old-k8s-version-159022)     </disk>
	I0319 20:25:20.424807   55982 main.go:141] libmachine: (old-k8s-version-159022)     <disk type='file' device='disk'>
	I0319 20:25:20.424824   55982 main.go:141] libmachine: (old-k8s-version-159022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 20:25:20.424842   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/old-k8s-version-159022.rawdisk'/>
	I0319 20:25:20.424853   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target dev='hda' bus='virtio'/>
	I0319 20:25:20.424869   55982 main.go:141] libmachine: (old-k8s-version-159022)     </disk>
	I0319 20:25:20.424881   55982 main.go:141] libmachine: (old-k8s-version-159022)     <interface type='network'>
	I0319 20:25:20.424892   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source network='mk-old-k8s-version-159022'/>
	I0319 20:25:20.424907   55982 main.go:141] libmachine: (old-k8s-version-159022)       <model type='virtio'/>
	I0319 20:25:20.424919   55982 main.go:141] libmachine: (old-k8s-version-159022)     </interface>
	I0319 20:25:20.424926   55982 main.go:141] libmachine: (old-k8s-version-159022)     <interface type='network'>
	I0319 20:25:20.424947   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source network='default'/>
	I0319 20:25:20.424967   55982 main.go:141] libmachine: (old-k8s-version-159022)       <model type='virtio'/>
	I0319 20:25:20.424977   55982 main.go:141] libmachine: (old-k8s-version-159022)     </interface>
	I0319 20:25:20.424984   55982 main.go:141] libmachine: (old-k8s-version-159022)     <serial type='pty'>
	I0319 20:25:20.425000   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target port='0'/>
	I0319 20:25:20.425011   55982 main.go:141] libmachine: (old-k8s-version-159022)     </serial>
	I0319 20:25:20.425021   55982 main.go:141] libmachine: (old-k8s-version-159022)     <console type='pty'>
	I0319 20:25:20.425033   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target type='serial' port='0'/>
	I0319 20:25:20.425044   55982 main.go:141] libmachine: (old-k8s-version-159022)     </console>
	I0319 20:25:20.425054   55982 main.go:141] libmachine: (old-k8s-version-159022)     <rng model='virtio'>
	I0319 20:25:20.425074   55982 main.go:141] libmachine: (old-k8s-version-159022)       <backend model='random'>/dev/random</backend>
	I0319 20:25:20.425088   55982 main.go:141] libmachine: (old-k8s-version-159022)     </rng>
	I0319 20:25:20.425096   55982 main.go:141] libmachine: (old-k8s-version-159022)     
	I0319 20:25:20.425114   55982 main.go:141] libmachine: (old-k8s-version-159022)     
	I0319 20:25:20.425127   55982 main.go:141] libmachine: (old-k8s-version-159022)   </devices>
	I0319 20:25:20.425137   55982 main.go:141] libmachine: (old-k8s-version-159022) </domain>
	I0319 20:25:20.425146   55982 main.go:141] libmachine: (old-k8s-version-159022) 
	I0319 20:25:20.429121   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:50:b9:64 in network default
	I0319 20:25:20.429776   55982 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:25:20.429803   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:20.430475   55982 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:25:20.430840   55982 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:25:20.431352   55982 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:25:20.432067   55982 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
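The "Creating domain..." step feeds the domain XML logged above to libvirt. A rough stand-in using the virsh CLI instead of minikube's Go bindings is sketched below; it assumes the domain definition has already been saved to a file (the path is illustrative).

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Assumes /tmp/old-k8s-version-159022.xml holds a <domain> definition
	// like the one printed in the log above (illustrative path).
	domainXML := "/tmp/old-k8s-version-159022.xml"

	// Register the domain with libvirt, then boot it.
	for _, args := range [][]string{
		{"define", domainXML},
		{"start", "old-k8s-version-159022"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
	}
}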
	I0319 20:25:18.779874   55554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:25:18.779886   55554 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:25:18.779892   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:18.782615   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.782986   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:18.783010   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.783190   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:18.783380   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:18.783544   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:18.783669   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:18.783834   55554 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:18.783999   55554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0319 20:25:18.784004   55554 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:25:18.897595   55554 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:25:18.897643   55554 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:25:18.897665   55554 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:25:18.897673   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetMachineName
	I0319 20:25:18.897932   55554 buildroot.go:166] provisioning hostname "cert-expiration-428153"
	I0319 20:25:18.897956   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetMachineName
	I0319 20:25:18.898140   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:18.900813   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.901155   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:18.901191   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:18.901288   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:18.901488   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:18.901664   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:18.901756   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:18.901937   55554 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:18.902094   55554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0319 20:25:18.902100   55554 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-428153 && echo "cert-expiration-428153" | sudo tee /etc/hostname
	I0319 20:25:19.033027   55554 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-428153
	
	I0319 20:25:19.033050   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:19.035974   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.036344   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.036376   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.036530   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:19.036727   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.036889   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.037016   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:19.037172   55554 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:19.037350   55554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0319 20:25:19.037361   55554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-428153' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-428153/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-428153' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:25:19.162530   55554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
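The provisioning step above runs "sudo hostname ... | sudo tee /etc/hostname" and the /etc/hosts fix-up over SSH against 192.168.39.211. A minimal sketch of issuing the same hostname command with golang.org/x/crypto/ssh is shown below; this is not minikube's own ssh runner, and the key path and address are taken from the log purely for illustration.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as reported in the log above; adjust for a real machine.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.211:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Same hostname command the provisioner runs, per the log above.
	name := "cert-expiration-428153"
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name))
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}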
	I0319 20:25:19.162557   55554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:25:19.162584   55554 buildroot.go:174] setting up certificates
	I0319 20:25:19.162595   55554 provision.go:84] configureAuth start
	I0319 20:25:19.162607   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetMachineName
	I0319 20:25:19.162895   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetIP
	I0319 20:25:19.165486   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.165830   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.165848   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.166073   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:19.168532   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.168904   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.168926   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.169046   55554 provision.go:143] copyHostCerts
	I0319 20:25:19.169110   55554 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:25:19.169118   55554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:25:19.169177   55554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:25:19.169288   55554 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:25:19.169295   55554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:25:19.169322   55554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:25:19.169395   55554 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:25:19.169400   55554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:25:19.169424   55554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:25:19.169485   55554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-428153 san=[127.0.0.1 192.168.39.211 cert-expiration-428153 localhost minikube]
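The line above reports the server certificate being issued from the CA key pair with SANs [127.0.0.1 192.168.39.211 cert-expiration-428153 localhost minikube]. The sketch below produces a certificate with the same SANs and organization using Go's crypto/x509; it is only an illustration, not minikube's code path, and it assumes the CA key is an RSA PKCS#1 PEM at the illustrative paths ca.pem / ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (paths and key format are assumptions).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM material")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-428153"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * time.Minute), // short validity for illustration; the profile was started with --cert-expiration=3m
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs reported in the log line above.
		DNSNames:    []string{"cert-expiration-428153", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	writePEM := func(path, typ string, b []byte, mode os.FileMode) {
		if err := os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: typ, Bytes: b}), mode); err != nil {
			log.Fatal(err)
		}
	}
	writePEM("server.pem", "CERTIFICATE", der, 0o644)
	writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(serverKey), 0o600)
}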
	I0319 20:25:19.278903   55554 provision.go:177] copyRemoteCerts
	I0319 20:25:19.278947   55554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:25:19.278967   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:19.281440   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.281788   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.281813   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.281966   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:19.282150   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.282264   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:19.282368   55554 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa Username:docker}
	I0319 20:25:19.371296   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:25:19.403420   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:25:19.430759   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:25:19.460655   55554 provision.go:87] duration metric: took 298.035238ms to configureAuth
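The server certificate provisioned above carries the SANs listed in the earlier provision.go line (127.0.0.1, 192.168.39.211, cert-expiration-428153, localhost, minikube) and is signed by the local minikube CA. A minimal Go sketch of issuing such a CA-signed serving certificate, assuming a PKCS#1 RSA CA key and with error handling elided (not minikube's actual provision.go):

// Issue a server cert signed by an existing CA, with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (ca.pem / ca-key.pem in the log); PKCS#1 RSA is assumed.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Server key plus a template with the SANs shown in the log line.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-428153"}},
		DNSNames:     []string{"cert-expiration-428153", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}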
	I0319 20:25:19.460669   55554 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:25:19.460809   55554 config.go:182] Loaded profile config "cert-expiration-428153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:25:19.460867   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:19.463484   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.463822   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.463840   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.464035   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:19.464217   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.464366   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.464489   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:19.464605   55554 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:19.464774   55554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0319 20:25:19.464785   55554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:25:19.750196   55554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
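The %!s(MISSING) and %!N(MISSING) fragments in the echoed commands here (and in several later lines) are Go's fmt notation for a format verb with no matching operand: the underlying shell commands really contain literal format strings such as printf %s and date +%s.%N, and when the logger runs that text back through fmt without operands, each verb is rendered as %!<verb>(MISSING). A two-line demonstration:

package main

import "fmt"

func main() {
	// No operands are supplied, so fmt renders each verb as %!<verb>(MISSING),
	// exactly as in the echoed commands above (go vet would flag this; it is
	// deliberate here).
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
	fmt.Printf("date +%s.%N\n")
}

So the commands executed on the guest are intact; only their logged echo is mangled.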
	I0319 20:25:19.750213   55554 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:25:19.750220   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetURL
	I0319 20:25:19.751477   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | Using libvirt version 6000000
	I0319 20:25:19.753931   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.754550   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.754573   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.754823   55554 main.go:141] libmachine: Docker is up and running!
	I0319 20:25:19.754830   55554 main.go:141] libmachine: Reticulating splines...
	I0319 20:25:19.754836   55554 client.go:171] duration metric: took 23.284303307s to LocalClient.Create
	I0319 20:25:19.754858   55554 start.go:167] duration metric: took 23.284365238s to libmachine.API.Create "cert-expiration-428153"
	I0319 20:25:19.754865   55554 start.go:293] postStartSetup for "cert-expiration-428153" (driver="kvm2")
	I0319 20:25:19.754876   55554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:25:19.754895   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:19.755135   55554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:25:19.755152   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:19.757375   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.757671   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.757690   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.757844   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:19.758025   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.758173   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:19.758290   55554 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa Username:docker}
	I0319 20:25:19.847259   55554 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:25:19.852328   55554 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:25:19.852343   55554 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:25:19.852436   55554 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:25:19.852551   55554 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:25:19.852664   55554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:25:19.863097   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:25:19.890212   55554 start.go:296] duration metric: took 135.337452ms for postStartSetup
	I0319 20:25:19.890253   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetConfigRaw
	I0319 20:25:19.890822   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetIP
	I0319 20:25:19.893612   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.893956   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.893973   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.894232   55554 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/config.json ...
	I0319 20:25:19.894453   55554 start.go:128] duration metric: took 23.444658708s to createHost
	I0319 20:25:19.894473   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:19.896758   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.897049   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:19.897065   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:19.897173   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:19.897353   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.897501   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:19.897675   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:19.897827   55554 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:19.897979   55554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0319 20:25:19.897989   55554 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:25:20.013394   55554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879919.994583863
	
	I0319 20:25:20.013406   55554 fix.go:216] guest clock: 1710879919.994583863
	I0319 20:25:20.013414   55554 fix.go:229] Guest: 2024-03-19 20:25:19.994583863 +0000 UTC Remote: 2024-03-19 20:25:19.894459662 +0000 UTC m=+56.215735116 (delta=100.124201ms)
	I0319 20:25:20.013430   55554 fix.go:200] guest clock delta is within tolerance: 100.124201ms
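fix.go above compares the guest clock (from date +%s.%N) against the host clock and only resyncs when the delta exceeds a tolerance; here the ~100ms difference passes. A small Go sketch of that comparison, using the two timestamps from the log (the 2s tolerance is an assumption for illustration, the log does not state the threshold):

package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Date(2024, 3, 19, 20, 25, 19, 994583863, time.UTC)
	host := time.Date(2024, 3, 19, 20, 25, 19, 894459662, time.UTC)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=100.124201ms within tolerance=true
}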
	I0319 20:25:20.013434   55554 start.go:83] releasing machines lock for "cert-expiration-428153", held for 23.563782346s
	I0319 20:25:20.013461   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:20.013742   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetIP
	I0319 20:25:20.016662   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:20.016983   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:20.017003   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:20.017195   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:20.017640   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:20.017828   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:20.017908   55554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:25:20.017941   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:20.018079   55554 ssh_runner.go:195] Run: cat /version.json
	I0319 20:25:20.018099   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:20.020891   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:20.020910   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:20.021241   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:20.021265   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:20.021323   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:20.021342   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:20.021429   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:20.021609   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:20.021696   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:20.021757   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:20.021855   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:20.021901   55554 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa Username:docker}
	I0319 20:25:20.021970   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:20.022109   55554 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa Username:docker}
	I0319 20:25:20.107014   55554 ssh_runner.go:195] Run: systemctl --version
	I0319 20:25:20.134876   55554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:25:20.313027   55554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:25:20.319664   55554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:25:20.319716   55554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:25:20.339920   55554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:25:20.339947   55554 start.go:494] detecting cgroup driver to use...
	I0319 20:25:20.340010   55554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:25:20.359056   55554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:25:20.376597   55554 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:25:20.376646   55554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:25:20.393822   55554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:25:20.411295   55554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:25:20.543361   55554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:25:20.728695   55554 docker.go:233] disabling docker service ...
	I0319 20:25:20.728736   55554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:25:20.748578   55554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:25:20.763834   55554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:25:20.901345   55554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:25:21.063350   55554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:25:21.080187   55554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:25:21.101185   55554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:25:21.101231   55554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:21.112615   55554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:25:21.112671   55554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:21.123771   55554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:21.135094   55554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:21.146158   55554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:25:21.157883   55554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:21.169445   55554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:21.192402   55554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
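The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image is registry.k8s.io/pause:3.9, the cgroup manager is cgroupfs, conmon runs in the pod cgroup, and default_sysctls opens net.ipv4.ip_unprivileged_port_start=0. A small Go sketch (an assumed helper, not part of the test) that checks the file for those values:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Expected results of the sed commands in the log above.
	for _, want := range []string{
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`cgroup_manager = "cgroupfs"`,
		`conmon_cgroup = "pod"`,
		`"net.ipv4.ip_unprivileged_port_start=0"`,
	} {
		fmt.Printf("%-55s %v\n", want, strings.Contains(conf, want))
	}
}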
	I0319 20:25:21.209436   55554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:25:21.223883   55554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:25:21.223921   55554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:25:21.240109   55554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
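The failed sysctl probe above simply means the br_netfilter module was not loaded yet; loading it creates /proc/sys/net/bridge/bridge-nf-call-iptables, and IPv4 forwarding is then switched on. A rough Go sketch of the same probe-then-load sequence (requires root; error handling kept minimal):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The sysctl key only exists once the bridge-netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		_ = exec.Command("modprobe", "br_netfilter").Run()
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	_ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}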
	I0319 20:25:21.250398   55554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:21.379849   55554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:25:21.543306   55554 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:25:21.543366   55554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:25:21.550030   55554 start.go:562] Will wait 60s for crictl version
	I0319 20:25:21.550083   55554 ssh_runner.go:195] Run: which crictl
	I0319 20:25:21.554630   55554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:25:21.598201   55554 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:25:21.598273   55554 ssh_runner.go:195] Run: crio --version
	I0319 20:25:21.634138   55554 ssh_runner.go:195] Run: crio --version
	I0319 20:25:21.671713   55554 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:25:21.673167   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetIP
	I0319 20:25:21.676117   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:21.676592   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:21.676617   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:21.676813   55554 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:25:21.681605   55554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
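The bash one-liner above makes the /etc/hosts update idempotent: any existing host.minikube.internal line is filtered out before the fresh 192.168.39.1 mapping is appended. A simplified Go sketch of the same update (requires root, error handling elided):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, _ := os.ReadFile("/etc/hosts")
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any stale mapping, mirroring grep -v $'\thost.minikube.internal$'.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	_ = os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}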
	I0319 20:25:21.698133   55554 kubeadm.go:877] updating cluster {Name:cert-expiration-428153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-428153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:25:21.698241   55554 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:25:21.698279   55554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:25:21.750779   55554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:25:21.750851   55554 ssh_runner.go:195] Run: which lz4
	I0319 20:25:21.757255   55554 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:25:21.763533   55554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:25:21.763558   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:25:23.542283   55554 crio.go:462] duration metric: took 1.785069874s to copy over tarball
	I0319 20:25:23.542345   55554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:25:21.692285   55982 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:25:21.693038   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:21.693483   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:21.693531   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:21.693478   56546 retry.go:31] will retry after 300.974755ms: waiting for machine to come up
	I0319 20:25:21.996216   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:21.996815   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:21.996846   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:21.996757   56546 retry.go:31] will retry after 378.350693ms: waiting for machine to come up
	I0319 20:25:22.377082   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:22.378062   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:22.378093   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:22.378011   56546 retry.go:31] will retry after 337.090678ms: waiting for machine to come up
	I0319 20:25:22.716618   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:22.717084   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:22.717108   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:22.717034   56546 retry.go:31] will retry after 448.487874ms: waiting for machine to come up
	I0319 20:25:23.167271   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:23.167958   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:23.167989   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:23.167909   56546 retry.go:31] will retry after 738.736662ms: waiting for machine to come up
	I0319 20:25:23.907682   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:23.908392   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:23.908420   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:23.908359   56546 retry.go:31] will retry after 823.841957ms: waiting for machine to come up
	I0319 20:25:24.734060   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:24.734560   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:24.734588   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:24.734505   56546 retry.go:31] will retry after 1.015139108s: waiting for machine to come up
	I0319 20:25:25.751162   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:25.751729   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:25.751760   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:25.751675   56546 retry.go:31] will retry after 901.716648ms: waiting for machine to come up
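The interleaved old-k8s-version-159022 lines show the wait-for-IP loop: the domain has no DHCP lease yet, so retry.go sleeps for a growing, jittered interval between lookups. A sketch of such a loop is below; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the exact growth factor and jitter are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder for the real DHCP-lease query
}

func waitForIP(mac string, attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Jittered wait, then grow the base delay for the next round.
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
}

func main() {
	_, _ = waitForIP("52:54:00:be:83:01", 5)
}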
	I0319 20:25:26.350118   55554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.807748579s)
	I0319 20:25:26.350143   55554 crio.go:469] duration metric: took 2.807843054s to extract the tarball
	I0319 20:25:26.350149   55554 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:25:26.394232   55554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:25:26.481423   55554 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:25:26.481437   55554 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:25:26.481446   55554 kubeadm.go:928] updating node { 192.168.39.211 8443 v1.29.3 crio true true} ...
	I0319 20:25:26.481567   55554 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-428153 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-428153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:25:26.481644   55554 ssh_runner.go:195] Run: crio config
	I0319 20:25:26.541000   55554 cni.go:84] Creating CNI manager for ""
	I0319 20:25:26.541016   55554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:25:26.541030   55554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:25:26.541054   55554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-428153 NodeName:cert-expiration-428153 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:25:26.541222   55554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-428153"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
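One quick way to sanity-check the kubeadm config dumped above (a sketch, not something the test itself does) is to decode each YAML document from the file written below as /var/tmp/minikube/kubeadm.yaml.new and print its kind plus the ClusterConfiguration networking block, e.g. with gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("kind:", doc["kind"])
		if doc["kind"] == "ClusterConfiguration" {
			fmt.Printf("  networking: %v\n", doc["networking"])
		}
	}
}

With the config above this should report InitConfiguration, ClusterConfiguration (networking: cluster.local / 10.244.0.0/16 / 10.96.0.0/12), KubeletConfiguration and KubeProxyConfiguration.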
	I0319 20:25:26.541298   55554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:25:26.554576   55554 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:25:26.554641   55554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:25:26.566954   55554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0319 20:25:26.591815   55554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:25:26.614244   55554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0319 20:25:26.632927   55554 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0319 20:25:26.637329   55554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:25:26.651916   55554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:26.796174   55554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:25:26.821314   55554 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153 for IP: 192.168.39.211
	I0319 20:25:26.821325   55554 certs.go:194] generating shared ca certs ...
	I0319 20:25:26.821338   55554 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:26.821499   55554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:25:26.821528   55554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:25:26.821533   55554 certs.go:256] generating profile certs ...
	I0319 20:25:26.821585   55554 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/client.key
	I0319 20:25:26.821593   55554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/client.crt with IP's: []
	I0319 20:25:26.888097   55554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/client.crt ...
	I0319 20:25:26.888120   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/client.crt: {Name:mk8bbaefb87744333d37424a14ec951b992ebdb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:26.888361   55554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/client.key ...
	I0319 20:25:26.888384   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/client.key: {Name:mk75b2854b005e0efefb6c328e283a1c8b424817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:26.888511   55554 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.key.7c6ca58c
	I0319 20:25:26.888527   55554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.crt.7c6ca58c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I0319 20:25:27.031821   55554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.crt.7c6ca58c ...
	I0319 20:25:27.031836   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.crt.7c6ca58c: {Name:mkb0f70fab6b564245c1018d48e38a2863f501f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:27.031985   55554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.key.7c6ca58c ...
	I0319 20:25:27.031992   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.key.7c6ca58c: {Name:mkf6ef049714a9ce904c8fd483dbd18b5a705fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:27.032066   55554 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.crt.7c6ca58c -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.crt
	I0319 20:25:27.032142   55554 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.key.7c6ca58c -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.key
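The 10.96.0.1 SAN in the apiserver certificate above is the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster ClusterIP of the kubernetes service, which the apiserver certificate must cover. A tiny Go sketch deriving that address from the CIDR:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("10.96.0.0/12")
	first := prefix.Addr().Next() // network address + 1
	fmt.Println(first)            // 10.96.0.1
}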
	I0319 20:25:27.032191   55554 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.key
	I0319 20:25:27.032201   55554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.crt with IP's: []
	I0319 20:25:27.216082   55554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.crt ...
	I0319 20:25:27.216099   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.crt: {Name:mkbac9a63776d8ce3b6a0b6cc910c29c14f6705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:27.216246   55554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.key ...
	I0319 20:25:27.216253   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.key: {Name:mk6ba9009b21185dc0b9175dede1901444383d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:27.216453   55554 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:25:27.216488   55554 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:25:27.216496   55554 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:25:27.216514   55554 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:25:27.216537   55554 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:25:27.216562   55554 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:25:27.216597   55554 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:25:27.217291   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:25:27.247635   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:25:27.276435   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:25:27.308050   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:25:27.336883   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:25:27.392541   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:25:27.427752   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:25:27.457224   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 20:25:27.488058   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:25:27.516716   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:25:27.544288   55554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:25:27.574315   55554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:25:27.593193   55554 ssh_runner.go:195] Run: openssl version
	I0319 20:25:27.599887   55554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:25:27.611830   55554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:25:27.617273   55554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:25:27.617321   55554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:25:27.624000   55554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:25:27.639331   55554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:25:27.661841   55554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:27.668650   55554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:27.668709   55554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:27.677872   55554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:25:27.700491   55554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:25:27.719341   55554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:25:27.724697   55554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:25:27.724741   55554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:25:27.731318   55554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
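The symlinks created above follow OpenSSL's subject-hash lookup convention: a CA in /etc/ssl/certs is found through a link named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (per the commands above, b5213941 for minikubeCA.pem, 3ec20f2e for 173012.pem, 51391683 for 17301.pem). A small Go sketch reproducing the link for one certificate (requires the openssl binary and root for the symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 per the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}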
	I0319 20:25:27.743338   55554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:25:27.748242   55554 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 20:25:27.748300   55554 kubeadm.go:391] StartCluster: {Name:cert-expiration-428153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-428153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:25:27.748368   55554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:25:27.748425   55554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:25:27.789934   55554 cri.go:89] found id: ""
	I0319 20:25:27.789992   55554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 20:25:27.802015   55554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:25:27.812632   55554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:25:27.822877   55554 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:25:27.822894   55554 kubeadm.go:156] found existing configuration files:
	
	I0319 20:25:27.822938   55554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:25:27.832637   55554 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:25:27.832698   55554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:25:27.842882   55554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:25:27.852917   55554 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:25:27.852960   55554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:25:27.863532   55554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:25:27.873639   55554 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:25:27.873709   55554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:25:27.883997   55554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:25:27.893759   55554 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:25:27.893797   55554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:25:27.904023   55554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:25:28.021335   55554 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:25:28.021434   55554 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:25:28.184999   55554 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:25:28.185168   55554 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:25:28.185307   55554 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:25:28.466546   55554 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:25:28.655065   55554 out.go:204]   - Generating certificates and keys ...
	I0319 20:25:28.655278   55554 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:25:28.655365   55554 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:25:28.778355   55554 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 20:25:28.889122   55554 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 20:25:29.050398   55554 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 20:25:29.337978   55554 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 20:25:29.466777   55554 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 20:25:29.467179   55554 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-428153 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0319 20:25:29.655132   55554 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 20:25:29.655515   55554 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-428153 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0319 20:25:29.802287   55554 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 20:25:30.062151   55554 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 20:25:30.253777   55554 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 20:25:30.253892   55554 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:25:30.443917   55554 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:25:30.658923   55554 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:25:30.767474   55554 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:25:31.060051   55554 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:25:31.129746   55554 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:25:31.130560   55554 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:25:31.133355   55554 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:25:26.654593   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:26.655089   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:26.655166   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:26.655062   56546 retry.go:31] will retry after 1.819645561s: waiting for machine to come up
	I0319 20:25:28.475818   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:28.476348   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:28.476378   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:28.476308   56546 retry.go:31] will retry after 1.820594289s: waiting for machine to come up
	I0319 20:25:30.298258   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:30.298796   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:30.298831   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:30.298734   56546 retry.go:31] will retry after 2.616805696s: waiting for machine to come up
	I0319 20:25:31.135282   55554 out.go:204]   - Booting up control plane ...
	I0319 20:25:31.135395   55554 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:25:31.135507   55554 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:25:31.135610   55554 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:25:31.153523   55554 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:25:31.153619   55554 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:25:31.153659   55554 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:25:31.284442   55554 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:25:32.918305   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:32.918846   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:32.918877   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:32.918787   56546 retry.go:31] will retry after 3.429736925s: waiting for machine to come up
	I0319 20:25:36.789637   55554 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503750 seconds
	I0319 20:25:36.808913   55554 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:25:36.828766   55554 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:25:37.366591   55554 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:25:37.366837   55554 kubeadm.go:309] [mark-control-plane] Marking the node cert-expiration-428153 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:25:37.881633   55554 kubeadm.go:309] [bootstrap-token] Using token: cx73km.ito39kzllats3fbq
	I0319 20:25:37.883150   55554 out.go:204]   - Configuring RBAC rules ...
	I0319 20:25:37.883269   55554 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:25:37.889630   55554 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:25:37.897440   55554 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:25:37.910292   55554 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:25:37.914010   55554 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:25:37.918102   55554 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:25:37.933189   55554 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:25:38.188815   55554 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:25:38.300894   55554 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:25:38.301863   55554 kubeadm.go:309] 
	I0319 20:25:38.301926   55554 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:25:38.301932   55554 kubeadm.go:309] 
	I0319 20:25:38.302006   55554 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:25:38.302010   55554 kubeadm.go:309] 
	I0319 20:25:38.302032   55554 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:25:38.302099   55554 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:25:38.302149   55554 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:25:38.302154   55554 kubeadm.go:309] 
	I0319 20:25:38.302240   55554 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:25:38.302244   55554 kubeadm.go:309] 
	I0319 20:25:38.302308   55554 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:25:38.302314   55554 kubeadm.go:309] 
	I0319 20:25:38.302370   55554 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:25:38.302472   55554 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:25:38.302546   55554 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:25:38.302569   55554 kubeadm.go:309] 
	I0319 20:25:38.302635   55554 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:25:38.302735   55554 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:25:38.302738   55554 kubeadm.go:309] 
	I0319 20:25:38.302845   55554 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cx73km.ito39kzllats3fbq \
	I0319 20:25:38.302956   55554 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:25:38.302978   55554 kubeadm.go:309] 	--control-plane 
	I0319 20:25:38.302996   55554 kubeadm.go:309] 
	I0319 20:25:38.303114   55554 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:25:38.303119   55554 kubeadm.go:309] 
	I0319 20:25:38.303231   55554 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cx73km.ito39kzllats3fbq \
	I0319 20:25:38.303378   55554 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:25:38.305106   55554 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
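The only pre-flight warning in this run is that the kubelet service is not enabled at boot. The log later shows minikube starting the unit directly, so this is for reference only; the fix is the one the warning itself suggests:

	sudo systemctl enable kubelet.service   # enable kubelet at boot, as the warning recommends
	systemctl is-enabled kubelet            # verify: should print "enabled"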
	I0319 20:25:38.305121   55554 cni.go:84] Creating CNI manager for ""
	I0319 20:25:38.305127   55554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:25:38.307017   55554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:25:38.308414   55554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:25:38.345443   55554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
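The 457-byte conflist copied above is the bridge CNI configuration; its contents are not echoed in the log. A hedged way to inspect the result on the node (the /opt/cni/bin location is the conventional CNI plugin directory and is an assumption here):

	sudo cat /etc/cni/net.d/1-k8s.conflist   # show the bridge conflist minikube just wrote
	ls /opt/cni/bin/                         # confirm the standard CNI plugin binaries are present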
	I0319 20:25:38.452448   55554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:25:38.452507   55554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:25:38.452512   55554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-428153 minikube.k8s.io/updated_at=2024_03_19T20_25_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=cert-expiration-428153 minikube.k8s.io/primary=true
	I0319 20:25:38.489095   55554 ops.go:34] apiserver oom_adj: -16
	I0319 20:25:38.685319   55554 kubeadm.go:1107] duration metric: took 232.86735ms to wait for elevateKubeSystemPrivileges
	W0319 20:25:38.685361   55554 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:25:38.685367   55554 kubeadm.go:393] duration metric: took 10.937071088s to StartCluster
	I0319 20:25:38.685380   55554 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:38.685439   55554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:25:38.686377   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:38.686594   55554 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:25:38.688428   55554 out.go:177] * Verifying Kubernetes components...
	I0319 20:25:38.686622   55554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 20:25:38.686634   55554 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:25:38.686770   55554 config.go:182] Loaded profile config "cert-expiration-428153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:25:38.690020   55554 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-428153"
	I0319 20:25:38.690035   55554 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-428153"
	I0319 20:25:38.690051   55554 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-428153"
	I0319 20:25:38.690063   55554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-428153"
	I0319 20:25:38.690088   55554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:38.690094   55554 host.go:66] Checking if "cert-expiration-428153" exists ...
	I0319 20:25:38.690546   55554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:38.690546   55554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:38.690563   55554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:38.690564   55554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:38.705194   55554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0319 20:25:38.705657   55554 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:38.705779   55554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0319 20:25:38.706164   55554 main.go:141] libmachine: Using API Version  1
	I0319 20:25:38.706178   55554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:38.706210   55554 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:38.706580   55554 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:38.706659   55554 main.go:141] libmachine: Using API Version  1
	I0319 20:25:38.706674   55554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:38.706752   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetState
	I0319 20:25:38.707008   55554 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:38.707592   55554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:38.707630   55554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:38.710181   55554 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-428153"
	I0319 20:25:38.710204   55554 host.go:66] Checking if "cert-expiration-428153" exists ...
	I0319 20:25:38.710445   55554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:38.710471   55554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:38.722664   55554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0319 20:25:38.723145   55554 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:38.723693   55554 main.go:141] libmachine: Using API Version  1
	I0319 20:25:38.723713   55554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:38.724102   55554 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:38.724308   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetState
	I0319 20:25:38.724952   55554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0319 20:25:38.725332   55554 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:38.725844   55554 main.go:141] libmachine: Using API Version  1
	I0319 20:25:38.727336   55554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:38.726074   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:38.729093   55554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:38.727701   55554 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:38.730633   55554 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:25:38.730642   55554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:25:38.730655   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:38.731054   55554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:38.731073   55554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:38.734007   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:38.734489   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:38.734510   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:38.734708   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:38.735079   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:38.735261   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:38.735407   55554 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa Username:docker}
	I0319 20:25:38.747074   55554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I0319 20:25:38.747499   55554 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:38.747998   55554 main.go:141] libmachine: Using API Version  1
	I0319 20:25:38.748015   55554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:38.748347   55554 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:38.748597   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetState
	I0319 20:25:38.750266   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .DriverName
	I0319 20:25:38.750583   55554 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:25:38.750593   55554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:25:38.750611   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHHostname
	I0319 20:25:38.753703   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:38.754031   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c4:1a", ip: ""} in network mk-cert-expiration-428153: {Iface:virbr1 ExpiryTime:2024-03-19 21:25:12 +0000 UTC Type:0 Mac:52:54:00:19:c4:1a Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:cert-expiration-428153 Clientid:01:52:54:00:19:c4:1a}
	I0319 20:25:38.754052   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | domain cert-expiration-428153 has defined IP address 192.168.39.211 and MAC address 52:54:00:19:c4:1a in network mk-cert-expiration-428153
	I0319 20:25:38.754156   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHPort
	I0319 20:25:38.754313   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHKeyPath
	I0319 20:25:38.754485   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .GetSSHUsername
	I0319 20:25:38.754626   55554 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-expiration-428153/id_rsa Username:docker}
	I0319 20:25:38.875083   55554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:25:38.875359   55554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0319 20:25:38.917744   55554 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:25:38.917793   55554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:25:39.026236   55554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:25:39.082218   55554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:25:39.314289   55554 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0319 20:25:39.314354   55554 api_server.go:72] duration metric: took 627.731416ms to wait for apiserver process to appear ...
	I0319 20:25:39.314373   55554 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:25:39.314403   55554 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I0319 20:25:39.319264   55554 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
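The healthz probe above can be reproduced by hand. The apiserver address comes from the log, and -k is needed because the serving certificate is signed by the cluster CA rather than a publicly trusted one; anonymous access to the health endpoints is assumed, which is the Kubernetes default:

	curl -k https://192.168.39.211:8443/healthz   # expected output: ok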
	I0319 20:25:39.327778   55554 api_server.go:141] control plane version: v1.29.3
	I0319 20:25:39.327790   55554 api_server.go:131] duration metric: took 13.411848ms to wait for apiserver health ...
	I0319 20:25:39.327796   55554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:25:39.347436   55554 system_pods.go:59] 4 kube-system pods found
	I0319 20:25:39.347448   55554 system_pods.go:61] "etcd-cert-expiration-428153" [85a7d6ea-1e7b-47e8-9543-a3f844fa2683] Pending
	I0319 20:25:39.347452   55554 system_pods.go:61] "kube-apiserver-cert-expiration-428153" [e2640eb9-87b0-449c-812f-66719d6d9ada] Pending
	I0319 20:25:39.347454   55554 system_pods.go:61] "kube-controller-manager-cert-expiration-428153" [a1eb3def-bf4b-48d1-9f48-c41a829c9616] Pending
	I0319 20:25:39.347456   55554 system_pods.go:61] "kube-scheduler-cert-expiration-428153" [9e2d458c-50ad-4ae1-9258-9351c719e037] Pending
	I0319 20:25:39.347460   55554 system_pods.go:74] duration metric: took 19.66062ms to wait for pod list to return data ...
	I0319 20:25:39.347467   55554 kubeadm.go:576] duration metric: took 660.84785ms to wait for: map[apiserver:true system_pods:true]
	I0319 20:25:39.347475   55554 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:25:39.357759   55554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:25:39.357776   55554 node_conditions.go:123] node cpu capacity is 2
	I0319 20:25:39.357784   55554 node_conditions.go:105] duration metric: took 10.305837ms to run NodePressure ...
	I0319 20:25:39.357794   55554 start.go:240] waiting for startup goroutines ...
	I0319 20:25:39.643030   55554 main.go:141] libmachine: Making call to close driver server
	I0319 20:25:39.643046   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .Close
	I0319 20:25:39.643101   55554 main.go:141] libmachine: Making call to close driver server
	I0319 20:25:39.643115   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .Close
	I0319 20:25:39.643353   55554 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:25:39.643364   55554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:25:39.643371   55554 main.go:141] libmachine: Making call to close driver server
	I0319 20:25:39.643378   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .Close
	I0319 20:25:39.643443   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | Closing plugin on server side
	I0319 20:25:39.643473   55554 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:25:39.643478   55554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:25:39.643484   55554 main.go:141] libmachine: Making call to close driver server
	I0319 20:25:39.643489   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .Close
	I0319 20:25:39.643597   55554 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:25:39.643617   55554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:25:39.643620   55554 main.go:141] libmachine: (cert-expiration-428153) DBG | Closing plugin on server side
	I0319 20:25:39.643716   55554 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:25:39.643724   55554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:25:39.657413   55554 main.go:141] libmachine: Making call to close driver server
	I0319 20:25:39.657423   55554 main.go:141] libmachine: (cert-expiration-428153) Calling .Close
	I0319 20:25:39.657715   55554 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:25:39.657724   55554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:25:39.659558   55554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0319 20:25:39.660835   55554 addons.go:505] duration metric: took 974.199596ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0319 20:25:39.819704   55554 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-428153" context rescaled to 1 replicas
	I0319 20:25:39.819738   55554 start.go:245] waiting for cluster config update ...
	I0319 20:25:39.819752   55554 start.go:254] writing updated cluster config ...
	I0319 20:25:39.820038   55554 ssh_runner.go:195] Run: rm -f paused
	I0319 20:25:39.869050   55554 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:25:39.870988   55554 out.go:177] * Done! kubectl is now configured to use "cert-expiration-428153" cluster and "default" namespace by default
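With kubectl now pointed at the cert-expiration-428153 context, a quick manual sanity check might look like this (illustrative only; the test harness performs its own verification):

	kubectl --context cert-expiration-428153 get nodes
	kubectl --context cert-expiration-428153 -n kube-system get pods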
	I0319 20:25:36.350814   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:36.351305   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:36.351325   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:36.351269   56546 retry.go:31] will retry after 4.231400763s: waiting for machine to come up
	I0319 20:25:40.584325   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:40.584739   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:40.584767   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:40.584692   56546 retry.go:31] will retry after 5.452618525s: waiting for machine to come up
	I0319 20:25:46.038847   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.039345   55982 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:25:46.039379   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.039389   55982 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:25:46.039719   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022
	I0319 20:25:46.112140   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:25:46.112168   55982 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:25:46.112182   55982 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:25:46.114965   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.115446   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.115479   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.115614   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:25:46.115639   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:25:46.115668   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:25:46.115689   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:25:46.115704   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:25:46.249138   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:25:46.249424   55982 main.go:141] libmachine: (old-k8s-version-159022) KVM machine creation complete!
	I0319 20:25:46.249807   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:25:46.250468   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:46.250692   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:46.250880   55982 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:25:46.250898   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:25:46.252136   55982 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:25:46.252148   55982 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:25:46.252153   55982 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:25:46.252159   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.254582   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.255005   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.255035   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.255178   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.255401   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.255543   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.255708   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.255914   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.256152   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.256167   55982 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:25:47.661830   56204 start.go:364] duration metric: took 52.116066675s to acquireMachinesLock for "no-preload-414130"
	I0319 20:25:47.661904   56204 start.go:93] Provisioning new machine with config: &{Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:25:47.662030   56204 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 20:25:46.372010   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:25:46.372034   55982 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:25:46.372044   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.374984   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.375360   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.375389   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.375524   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.375718   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.375883   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.376015   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.376152   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.376351   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.376367   55982 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:25:46.494038   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:25:46.494105   55982 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:25:46.494118   55982 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:25:46.494132   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:46.494390   55982 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:25:46.494419   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:46.494663   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.497351   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.497668   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.497697   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.497804   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.497978   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.498135   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.498270   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.498444   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.498603   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.498615   55982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:25:46.628943   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:25:46.628978   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.631797   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.632186   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.632209   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.632454   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.632664   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.632822   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.633003   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.633200   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.633371   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.633387   55982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:25:46.761522   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:25:46.761556   55982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:25:46.761592   55982 buildroot.go:174] setting up certificates
	I0319 20:25:46.761606   55982 provision.go:84] configureAuth start
	I0319 20:25:46.761617   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:46.761884   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:46.764704   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.765058   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.765089   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.765274   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.767545   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.767887   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.767923   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.768048   55982 provision.go:143] copyHostCerts
	I0319 20:25:46.768113   55982 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:25:46.768127   55982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:25:46.768208   55982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:25:46.768347   55982 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:25:46.768364   55982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:25:46.768397   55982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:25:46.768513   55982 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:25:46.768524   55982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:25:46.768553   55982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:25:46.768642   55982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:25:46.906018   55982 provision.go:177] copyRemoteCerts
	I0319 20:25:46.906097   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:25:46.906126   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.908825   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.909301   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.909328   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.909399   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.909611   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.909795   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.909933   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.001445   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:25:47.032497   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:25:47.062999   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:25:47.089974   55982 provision.go:87] duration metric: took 328.354781ms to configureAuth
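configureAuth generated a server certificate with the SANs listed above and copied it to the guest. One hedged way to confirm those SANs from the build host, assuming openssl is installed there:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'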
	I0319 20:25:47.090004   55982 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:25:47.090215   55982 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:25:47.090290   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.092919   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.093301   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.093329   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.093482   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.093671   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.093834   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.094009   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.094188   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:47.094361   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:47.094389   55982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:25:47.389752   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
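The SSH command above writes a CRIO_MINIKUBE_OPTIONS drop-in and restarts CRI-O so the service CIDR is treated as an insecure registry. A hedged check that the drop-in landed and the daemon came back up:

	cat /etc/sysconfig/crio.minikube   # should contain the --insecure-registry line shown above
	sudo systemctl is-active crio      # expected output: active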
	
	I0319 20:25:47.389778   55982 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:25:47.389787   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetURL
	I0319 20:25:47.391023   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using libvirt version 6000000
	I0319 20:25:47.393502   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.393873   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.393918   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.394056   55982 main.go:141] libmachine: Docker is up and running!
	I0319 20:25:47.394073   55982 main.go:141] libmachine: Reticulating splines...
	I0319 20:25:47.394081   55982 client.go:171] duration metric: took 27.359310019s to LocalClient.Create
	I0319 20:25:47.394110   55982 start.go:167] duration metric: took 27.359377921s to libmachine.API.Create "old-k8s-version-159022"
	I0319 20:25:47.394120   55982 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:25:47.394131   55982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:25:47.394146   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.394369   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:25:47.394393   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.396807   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.397109   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.397143   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.397295   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.397448   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.397599   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.397785   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.490331   55982 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:25:47.495270   55982 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:25:47.495296   55982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:25:47.495364   55982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:25:47.495431   55982 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:25:47.495517   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:25:47.508162   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:25:47.536042   55982 start.go:296] duration metric: took 141.908819ms for postStartSetup
	I0319 20:25:47.536094   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:25:47.536673   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:47.539443   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.539769   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.539793   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.540012   55982 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:25:47.540247   55982 start.go:128] duration metric: took 27.526518248s to createHost
	I0319 20:25:47.540294   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.542495   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.542838   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.542860   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.543017   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.543167   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.543310   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.543428   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.543574   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:47.543732   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:47.543745   55982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:25:47.661671   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879947.645733085
	
	I0319 20:25:47.661695   55982 fix.go:216] guest clock: 1710879947.645733085
	I0319 20:25:47.661702   55982 fix.go:229] Guest: 2024-03-19 20:25:47.645733085 +0000 UTC Remote: 2024-03-19 20:25:47.540279791 +0000 UTC m=+61.265095128 (delta=105.453294ms)
	I0319 20:25:47.661719   55982 fix.go:200] guest clock delta is within tolerance: 105.453294ms
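
The fix.go lines above are minikube's guest-clock sanity check: it reads the VM's clock over SSH, compares it with the host's, and only forces a resync when the delta exceeds its tolerance (the 105 ms seen here is accepted). A minimal stand-alone sketch of the same comparison, reusing the SSH key path and guest address from this run:

# Compare guest and host clocks with sub-second precision.
guest=$(ssh -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa docker@192.168.61.28 'date +%s.%N')
host=$(date +%s.%N)
awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest/host clock delta: %.3f s\n", h - g }'
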
	I0319 20:25:47.661723   55982 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 27.648182773s
	I0319 20:25:47.661747   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.662043   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:47.665318   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.665745   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.665772   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.665924   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.666372   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.666626   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.666722   55982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:25:47.666765   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.666857   55982 ssh_runner.go:195] Run: cat /version.json
	I0319 20:25:47.666887   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.669670   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.669921   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.670174   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.670205   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.670519   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.670533   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.670614   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.670739   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.670765   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.670897   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.670944   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.671104   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.671131   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.671276   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.762533   55982 ssh_runner.go:195] Run: systemctl --version
	I0319 20:25:47.788473   55982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:25:47.954051   55982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:25:47.962242   55982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:25:47.962319   55982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:25:47.982103   55982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
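
Anything already sitting in /etc/cni/net.d (the stock bridge and podman configs shipped in the guest image) would be loaded by CRI-O ahead of the CNI minikube installs, so the find command above renames those files out of the way; the log confirms 87-podman-bridge.conflist was disabled. A slightly expanded, quoting-safe sketch of the same step:

# Rename pre-existing bridge/podman CNI configs so CRI-O ignores them.
sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
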
	I0319 20:25:47.982145   55982 start.go:494] detecting cgroup driver to use...
	I0319 20:25:47.982217   55982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:25:48.003993   55982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:25:48.021105   55982 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:25:48.021166   55982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:25:48.040411   55982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:25:48.058121   55982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:25:48.212569   55982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:25:48.390385   55982 docker.go:233] disabling docker service ...
	I0319 20:25:48.390453   55982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:25:48.408015   55982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:25:48.424140   55982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:25:48.566359   55982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:25:48.725824   55982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:25:48.754844   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:25:48.782110   55982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:25:48.782179   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.795927   55982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:25:48.795985   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.807923   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.819776   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.831705   55982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
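
The commands above finish pointing the node at CRI-O: crictl is told where the runtime socket lives (/etc/crictl.yaml), and the 02-crio.conf drop-in is edited in place so the pause image matches the Kubernetes version being installed and the cgroup manager matches the kubelet. A sketch of the end state those edits leave behind (values taken from this log):

# crictl talks to CRI-O through the socket recorded in /etc/crictl.yaml:
cat /etc/crictl.yaml
# runtime-endpoint: unix:///var/run/crio/crio.sock

# and the drop-in now carries the pinned pause image and cgroup settings:
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
# pause_image = "registry.k8s.io/pause:3.2"
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"
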
	I0319 20:25:48.845768   55982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:25:48.862637   55982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:25:48.862690   55982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:25:48.880204   55982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
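
The sysctl read above failed only because br_netfilter was not loaded yet, so minikube loads the module and turns on IPv4 forwarding; both are prerequisites for bridged pod traffic to be seen by iptables and routed off the node. A compact sketch of the usual preparation (the log only reads net.bridge.bridge-nf-call-iptables to check that it exists; setting it explicitly, as below, is the conventional belt-and-braces form):

# Bridge-netfilter and forwarding prerequisites for a CNI-backed runtime.
sudo modprobe br_netfilter
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
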
	I0319 20:25:48.894624   55982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:49.054962   55982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:25:49.214429   55982 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:25:49.214492   55982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:25:49.219878   55982 start.go:562] Will wait 60s for crictl version
	I0319 20:25:49.219932   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:49.224417   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:25:49.275717   55982 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:25:49.275794   55982 ssh_runner.go:195] Run: crio --version
	I0319 20:25:49.313888   55982 ssh_runner.go:195] Run: crio --version
	I0319 20:25:49.358265   55982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:25:47.664352   56204 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 20:25:47.664550   56204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:47.664598   56204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:47.681819   56204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0319 20:25:47.682239   56204 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:47.682752   56204 main.go:141] libmachine: Using API Version  1
	I0319 20:25:47.682776   56204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:47.683159   56204 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:47.683368   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:25:47.683527   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:25:47.683676   56204 start.go:159] libmachine.API.Create for "no-preload-414130" (driver="kvm2")
	I0319 20:25:47.683715   56204 client.go:168] LocalClient.Create starting
	I0319 20:25:47.683752   56204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 20:25:47.683789   56204 main.go:141] libmachine: Decoding PEM data...
	I0319 20:25:47.683812   56204 main.go:141] libmachine: Parsing certificate...
	I0319 20:25:47.683965   56204 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 20:25:47.684015   56204 main.go:141] libmachine: Decoding PEM data...
	I0319 20:25:47.684037   56204 main.go:141] libmachine: Parsing certificate...
	I0319 20:25:47.684063   56204 main.go:141] libmachine: Running pre-create checks...
	I0319 20:25:47.684079   56204 main.go:141] libmachine: (no-preload-414130) Calling .PreCreateCheck
	I0319 20:25:47.684515   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:25:47.684910   56204 main.go:141] libmachine: Creating machine...
	I0319 20:25:47.684925   56204 main.go:141] libmachine: (no-preload-414130) Calling .Create
	I0319 20:25:47.685053   56204 main.go:141] libmachine: (no-preload-414130) Creating KVM machine...
	I0319 20:25:47.686109   56204 main.go:141] libmachine: (no-preload-414130) DBG | found existing default KVM network
	I0319 20:25:47.687627   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:47.687468   56810 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:20:18} reservation:<nil>}
	I0319 20:25:47.688640   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:47.688540   56810 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:58:8d} reservation:<nil>}
	I0319 20:25:47.689838   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:47.689747   56810 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:37:30:13} reservation:<nil>}
	I0319 20:25:47.691201   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:47.691113   56810 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3980}
	I0319 20:25:47.691242   56204 main.go:141] libmachine: (no-preload-414130) DBG | created network xml: 
	I0319 20:25:47.691258   56204 main.go:141] libmachine: (no-preload-414130) DBG | <network>
	I0319 20:25:47.691274   56204 main.go:141] libmachine: (no-preload-414130) DBG |   <name>mk-no-preload-414130</name>
	I0319 20:25:47.691288   56204 main.go:141] libmachine: (no-preload-414130) DBG |   <dns enable='no'/>
	I0319 20:25:47.691298   56204 main.go:141] libmachine: (no-preload-414130) DBG |   
	I0319 20:25:47.691305   56204 main.go:141] libmachine: (no-preload-414130) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0319 20:25:47.691316   56204 main.go:141] libmachine: (no-preload-414130) DBG |     <dhcp>
	I0319 20:25:47.691327   56204 main.go:141] libmachine: (no-preload-414130) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0319 20:25:47.691335   56204 main.go:141] libmachine: (no-preload-414130) DBG |     </dhcp>
	I0319 20:25:47.691343   56204 main.go:141] libmachine: (no-preload-414130) DBG |   </ip>
	I0319 20:25:47.691347   56204 main.go:141] libmachine: (no-preload-414130) DBG |   
	I0319 20:25:47.691357   56204 main.go:141] libmachine: (no-preload-414130) DBG | </network>
	I0319 20:25:47.691364   56204 main.go:141] libmachine: (no-preload-414130) DBG | 
	I0319 20:25:47.696634   56204 main.go:141] libmachine: (no-preload-414130) DBG | trying to create private KVM network mk-no-preload-414130 192.168.72.0/24...
	I0319 20:25:47.768341   56204 main.go:141] libmachine: (no-preload-414130) DBG | private KVM network mk-no-preload-414130 192.168.72.0/24 created
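
minikube creates this isolated network through the libvirt API rather than the CLI, but the XML dumped above is an ordinary libvirt network definition. A hand-run equivalent, assuming the XML were saved to mk-no-preload-414130.xml (a hypothetical filename for the sketch):

# Define and start the private network described by the XML above.
virsh net-define mk-no-preload-414130.xml
virsh net-start mk-no-preload-414130
virsh net-list --all   # mk-no-preload-414130 should now be listed as active
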
	I0319 20:25:47.768372   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:47.768325   56810 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:47.768394   56204 main.go:141] libmachine: (no-preload-414130) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130 ...
	I0319 20:25:47.768407   56204 main.go:141] libmachine: (no-preload-414130) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 20:25:47.768441   56204 main.go:141] libmachine: (no-preload-414130) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 20:25:47.990924   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:47.990803   56810 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa...
	I0319 20:25:48.162762   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:48.162595   56810 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/no-preload-414130.rawdisk...
	I0319 20:25:48.162794   56204 main.go:141] libmachine: (no-preload-414130) DBG | Writing magic tar header
	I0319 20:25:48.162814   56204 main.go:141] libmachine: (no-preload-414130) DBG | Writing SSH key tar header
	I0319 20:25:48.162828   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:48.162722   56810 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130 ...
	I0319 20:25:48.162843   56204 main.go:141] libmachine: (no-preload-414130) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130 (perms=drwx------)
	I0319 20:25:48.162861   56204 main.go:141] libmachine: (no-preload-414130) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 20:25:48.162874   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130
	I0319 20:25:48.162895   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 20:25:48.162909   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:48.162923   56204 main.go:141] libmachine: (no-preload-414130) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 20:25:48.162936   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 20:25:48.162951   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 20:25:48.162970   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home/jenkins
	I0319 20:25:48.162982   56204 main.go:141] libmachine: (no-preload-414130) DBG | Checking permissions on dir: /home
	I0319 20:25:48.162990   56204 main.go:141] libmachine: (no-preload-414130) DBG | Skipping /home - not owner
	I0319 20:25:48.163000   56204 main.go:141] libmachine: (no-preload-414130) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 20:25:48.163015   56204 main.go:141] libmachine: (no-preload-414130) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 20:25:48.163026   56204 main.go:141] libmachine: (no-preload-414130) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 20:25:48.163039   56204 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:25:48.164156   56204 main.go:141] libmachine: (no-preload-414130) define libvirt domain using xml: 
	I0319 20:25:48.164187   56204 main.go:141] libmachine: (no-preload-414130) <domain type='kvm'>
	I0319 20:25:48.164198   56204 main.go:141] libmachine: (no-preload-414130)   <name>no-preload-414130</name>
	I0319 20:25:48.164205   56204 main.go:141] libmachine: (no-preload-414130)   <memory unit='MiB'>2200</memory>
	I0319 20:25:48.164214   56204 main.go:141] libmachine: (no-preload-414130)   <vcpu>2</vcpu>
	I0319 20:25:48.164221   56204 main.go:141] libmachine: (no-preload-414130)   <features>
	I0319 20:25:48.164241   56204 main.go:141] libmachine: (no-preload-414130)     <acpi/>
	I0319 20:25:48.164252   56204 main.go:141] libmachine: (no-preload-414130)     <apic/>
	I0319 20:25:48.164275   56204 main.go:141] libmachine: (no-preload-414130)     <pae/>
	I0319 20:25:48.164285   56204 main.go:141] libmachine: (no-preload-414130)     
	I0319 20:25:48.164296   56204 main.go:141] libmachine: (no-preload-414130)   </features>
	I0319 20:25:48.164301   56204 main.go:141] libmachine: (no-preload-414130)   <cpu mode='host-passthrough'>
	I0319 20:25:48.164306   56204 main.go:141] libmachine: (no-preload-414130)   
	I0319 20:25:48.164325   56204 main.go:141] libmachine: (no-preload-414130)   </cpu>
	I0319 20:25:48.164333   56204 main.go:141] libmachine: (no-preload-414130)   <os>
	I0319 20:25:48.164339   56204 main.go:141] libmachine: (no-preload-414130)     <type>hvm</type>
	I0319 20:25:48.164347   56204 main.go:141] libmachine: (no-preload-414130)     <boot dev='cdrom'/>
	I0319 20:25:48.164360   56204 main.go:141] libmachine: (no-preload-414130)     <boot dev='hd'/>
	I0319 20:25:48.164375   56204 main.go:141] libmachine: (no-preload-414130)     <bootmenu enable='no'/>
	I0319 20:25:48.164379   56204 main.go:141] libmachine: (no-preload-414130)   </os>
	I0319 20:25:48.164384   56204 main.go:141] libmachine: (no-preload-414130)   <devices>
	I0319 20:25:48.164392   56204 main.go:141] libmachine: (no-preload-414130)     <disk type='file' device='cdrom'>
	I0319 20:25:48.164404   56204 main.go:141] libmachine: (no-preload-414130)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/boot2docker.iso'/>
	I0319 20:25:48.164414   56204 main.go:141] libmachine: (no-preload-414130)       <target dev='hdc' bus='scsi'/>
	I0319 20:25:48.164419   56204 main.go:141] libmachine: (no-preload-414130)       <readonly/>
	I0319 20:25:48.164425   56204 main.go:141] libmachine: (no-preload-414130)     </disk>
	I0319 20:25:48.164460   56204 main.go:141] libmachine: (no-preload-414130)     <disk type='file' device='disk'>
	I0319 20:25:48.164487   56204 main.go:141] libmachine: (no-preload-414130)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 20:25:48.164510   56204 main.go:141] libmachine: (no-preload-414130)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/no-preload-414130.rawdisk'/>
	I0319 20:25:48.164522   56204 main.go:141] libmachine: (no-preload-414130)       <target dev='hda' bus='virtio'/>
	I0319 20:25:48.164531   56204 main.go:141] libmachine: (no-preload-414130)     </disk>
	I0319 20:25:48.164538   56204 main.go:141] libmachine: (no-preload-414130)     <interface type='network'>
	I0319 20:25:48.164547   56204 main.go:141] libmachine: (no-preload-414130)       <source network='mk-no-preload-414130'/>
	I0319 20:25:48.164554   56204 main.go:141] libmachine: (no-preload-414130)       <model type='virtio'/>
	I0319 20:25:48.164578   56204 main.go:141] libmachine: (no-preload-414130)     </interface>
	I0319 20:25:48.164590   56204 main.go:141] libmachine: (no-preload-414130)     <interface type='network'>
	I0319 20:25:48.164604   56204 main.go:141] libmachine: (no-preload-414130)       <source network='default'/>
	I0319 20:25:48.164616   56204 main.go:141] libmachine: (no-preload-414130)       <model type='virtio'/>
	I0319 20:25:48.164627   56204 main.go:141] libmachine: (no-preload-414130)     </interface>
	I0319 20:25:48.164639   56204 main.go:141] libmachine: (no-preload-414130)     <serial type='pty'>
	I0319 20:25:48.164651   56204 main.go:141] libmachine: (no-preload-414130)       <target port='0'/>
	I0319 20:25:48.164683   56204 main.go:141] libmachine: (no-preload-414130)     </serial>
	I0319 20:25:48.164710   56204 main.go:141] libmachine: (no-preload-414130)     <console type='pty'>
	I0319 20:25:48.164723   56204 main.go:141] libmachine: (no-preload-414130)       <target type='serial' port='0'/>
	I0319 20:25:48.164738   56204 main.go:141] libmachine: (no-preload-414130)     </console>
	I0319 20:25:48.164750   56204 main.go:141] libmachine: (no-preload-414130)     <rng model='virtio'>
	I0319 20:25:48.164763   56204 main.go:141] libmachine: (no-preload-414130)       <backend model='random'>/dev/random</backend>
	I0319 20:25:48.164774   56204 main.go:141] libmachine: (no-preload-414130)     </rng>
	I0319 20:25:48.164783   56204 main.go:141] libmachine: (no-preload-414130)     
	I0319 20:25:48.164789   56204 main.go:141] libmachine: (no-preload-414130)     
	I0319 20:25:48.164798   56204 main.go:141] libmachine: (no-preload-414130)   </devices>
	I0319 20:25:48.164817   56204 main.go:141] libmachine: (no-preload-414130) </domain>
	I0319 20:25:48.164834   56204 main.go:141] libmachine: (no-preload-414130) 
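
The same applies to the domain: the XML above (2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a bootable CD-ROM, the raw disk, and one NIC on each of the default and mk-no-preload-414130 networks) is handed to libvirt programmatically. The CLI equivalent, again assuming the dump were saved to a file (no-preload-414130.xml, hypothetical):

# Define and boot the VM from the dumped domain XML.
virsh define no-preload-414130.xml
virsh start no-preload-414130
virsh dominfo no-preload-414130   # CPU count, memory and state should match the XML
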
	I0319 20:25:48.172043   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:bb:28:ec in network default
	I0319 20:25:48.172560   56204 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:25:48.172584   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:48.173222   56204 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:25:48.173500   56204 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:25:48.174009   56204 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:25:48.174688   56204 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:25:49.569590   56204 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:25:49.570737   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:49.571181   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:49.571227   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:49.571154   56810 retry.go:31] will retry after 291.933811ms: waiting for machine to come up
	I0319 20:25:49.864662   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:49.865299   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:49.865338   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:49.865267   56810 retry.go:31] will retry after 323.996034ms: waiting for machine to come up
	I0319 20:25:50.190863   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:50.191484   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:50.191530   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:50.191445   56810 retry.go:31] will retry after 402.080464ms: waiting for machine to come up
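
The "Waiting to get IP" retries above (they continue below, interleaved with the other profile's log) are minikube polling the network's DHCP leases with a growing backoff until the new MAC address acquires an address. A rough shell equivalent of that wait, using the MAC from this run:

# Poll the network's DHCP leases until the domain's MAC has an IPv4 address.
mac=52:54:00:f0:f0:55
until ip=$(virsh net-dhcp-leases mk-no-preload-414130 | awk -v m="$mac" '$0 ~ m { print $5 }') && [ -n "$ip" ]; do
  sleep 2
done
echo "no-preload-414130 is reachable at ${ip%/*}"
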
	I0319 20:25:49.359618   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:49.362790   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:49.363262   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:49.363293   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:49.363591   55982 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:25:49.369067   55982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:25:49.385009   55982 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:25:49.385178   55982 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:25:49.385239   55982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:25:49.435514   55982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:25:49.435615   55982 ssh_runner.go:195] Run: which lz4
	I0319 20:25:49.441041   55982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:25:49.446341   55982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:25:49.446371   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:25:50.595020   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:50.595701   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:50.595736   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:50.595646   56810 retry.go:31] will retry after 504.680377ms: waiting for machine to come up
	I0319 20:25:51.102791   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:51.103393   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:51.103429   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:51.103330   56810 retry.go:31] will retry after 616.43164ms: waiting for machine to come up
	I0319 20:25:51.721792   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:51.722503   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:51.722544   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:51.722413   56810 retry.go:31] will retry after 594.676593ms: waiting for machine to come up
	I0319 20:25:52.318881   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:52.319472   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:52.319509   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:52.319401   56810 retry.go:31] will retry after 906.493251ms: waiting for machine to come up
	I0319 20:25:53.227979   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:53.228630   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:53.228652   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:53.228541   56810 retry.go:31] will retry after 1.021927413s: waiting for machine to come up
	I0319 20:25:54.251782   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:54.252363   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:54.252399   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:54.252303   56810 retry.go:31] will retry after 1.145152074s: waiting for machine to come up
	I0319 20:25:55.398829   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:55.399373   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:55.399406   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:55.399307   56810 retry.go:31] will retry after 1.969853802s: waiting for machine to come up
	I0319 20:25:51.708334   55982 crio.go:462] duration metric: took 2.267338917s to copy over tarball
	I0319 20:25:51.708422   55982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:25:54.900870   55982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.192420858s)
	I0319 20:25:54.900899   55982 crio.go:469] duration metric: took 3.192525975s to extract the tarball
	I0319 20:25:54.900908   55982 ssh_runner.go:146] rm: /preloaded.tar.lz4
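
This is the preload fast path: rather than pulling each image over the network, minikube ships a single lz4-compressed tarball of pre-populated container storage into the guest and unpacks it over /var, keeping xattrs and file capabilities intact so the extracted binaries still work. Condensed, the two logged steps amount to roughly the following (key path, address and filename as in this run):

# Copy the preload tarball into the guest and unpack it over /var.
key=/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa
cat preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 |
  ssh -i "$key" docker@192.168.61.28 'sudo tee /preloaded.tar.lz4 >/dev/null'
ssh -i "$key" docker@192.168.61.28 \
  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'

Even after the extraction, this run still reports the v1.20.0 images as missing, which is why the cached-image path below is attempted next.
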
	I0319 20:25:54.945654   55982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:25:54.997150   55982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:25:54.997184   55982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:25:54.997296   55982 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:54.997579   55982 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:25:54.997591   55982 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:54.997620   55982 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:54.997723   55982 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:54.997743   55982 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:54.997837   55982 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:25:54.997728   55982 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:54.998942   55982 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:54.999189   55982 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:54.999209   55982 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:54.999212   55982 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:54.999217   55982 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:25:54.999298   55982 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:54.999308   55982 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:54.999788   55982 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.154901   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:55.178995   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.194425   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:55.207142   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:25:55.213949   55982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:25:55.213982   55982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:55.214022   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.215693   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:55.249431   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:55.295990   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:55.300329   55982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:25:55.300435   55982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.300501   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.300329   55982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:25:55.300546   55982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:55.300605   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.346880   55982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:25:55.346948   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:55.346976   55982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:25:55.347016   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.369140   55982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:25:55.369184   55982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:55.369244   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.406307   55982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:25:55.406354   55982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:55.406381   55982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:25:55.406406   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.406422   55982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:55.406467   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.406483   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:55.406411   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.437222   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:25:55.437309   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:25:55.437341   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:55.517473   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:25:55.517517   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:55.517577   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:55.517723   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:25:55.553853   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:25:55.561362   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:25:55.598715   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:25:55.598735   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:25:55.931464   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:56.081504   55982 cache_images.go:92] duration metric: took 1.084303153s to LoadCachedImages
	W0319 20:25:56.081622   55982 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
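
The block above is LoadCachedImages reconciling what the runtime already holds against the eight images kubeadm will need: each image is looked up in CRI-O's storage via podman, anything whose ID does not match the pinned hash is removed with crictl rmi, and the image is then expected to come from minikube's on-disk cache, which is empty in this run (hence the warning; the bring-up continues without the cache). A simplified per-image sketch of that check (the real code also compares the returned ID against a pinned digest):

# Decide whether an image must be (re)loaded into the runtime.
img=registry.k8s.io/kube-proxy:v1.20.0
if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
  sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale tag
  echo "$img must be loaded from the cache (or pulled)"
fi
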
	I0319 20:25:56.081643   55982 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:25:56.081776   55982 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
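
The kubelet fragment above becomes a systemd drop-in: the distro unit's ExecStart is cleared and replaced with one that pins the CRI-O socket, the node IP and the hostname override. The log writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down and then reloads systemd and starts kubelet; doing the same by hand would look roughly like this (unit body exactly as logged):

# Install the kubelet drop-in shown above and activate it.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28

[Install]
EOF
sudo systemctl daemon-reload
sudo systemctl start kubelet
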
	I0319 20:25:56.081863   55982 ssh_runner.go:195] Run: crio config
	I0319 20:25:56.139415   55982 cni.go:84] Creating CNI manager for ""
	I0319 20:25:56.139438   55982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:25:56.139450   55982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:25:56.139467   55982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:25:56.139666   55982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:25:56.139756   55982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:25:56.151535   55982 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:25:56.151615   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:25:56.162688   55982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:25:56.183525   55982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:25:56.202662   55982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
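
With the kubelet drop-in, the service unit and the generated kubeadm config all copied onto the node (the config lands as /var/tmp/minikube/kubeadm.yaml.new, 2120 bytes), the bootstrap itself is a standard config-driven kubeadm init run with the version-matched binary under /var/lib/minikube/binaries; minikube adds its own --ignore-preflight-errors list on top. A minimal sketch of the equivalent manual call, assuming the .new file has been moved into place as /var/tmp/minikube/kubeadm.yaml:

# Bootstrap the control plane from the generated kubeadm config.
sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
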
	I0319 20:25:56.222091   55982 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:25:56.226617   55982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:25:56.240650   55982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:56.387390   55982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:25:56.579447   55982 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:25:56.579475   55982 certs.go:194] generating shared ca certs ...
	I0319 20:25:56.579495   55982 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.579653   55982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:25:56.579726   55982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:25:56.579755   55982 certs.go:256] generating profile certs ...
	I0319 20:25:56.579866   55982 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:25:56.579886   55982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt with IP's: []
	I0319 20:25:56.671840   55982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt ...
	I0319 20:25:56.671869   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: {Name:mk61c72fce9c679651e7a9e1decdd5e5de4586de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.672041   55982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key ...
	I0319 20:25:56.672059   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key: {Name:mk4f4dae15ac74583de7977e4327a5ef8cb539c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.672176   55982 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:25:56.672196   55982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.28]
	I0319 20:25:56.842042   55982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4 ...
	I0319 20:25:56.842071   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4: {Name:mkca57b1829959d481c3f86e2a09caa6cc12fe28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.842247   55982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4 ...
	I0319 20:25:56.842263   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4: {Name:mk2b34b2cf48017b7663a3ecf0d75ff3f102a0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.842355   55982 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt
	I0319 20:25:56.842468   55982 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key
	I0319 20:25:56.842550   55982 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:25:56.842576   55982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt with IP's: []
	I0319 20:25:57.176440   55982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt ...
	I0319 20:25:57.176468   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt: {Name:mk6cfea44b706848d0ef5c66bf794a61fff6c263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:57.176654   55982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key ...
	I0319 20:25:57.176673   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key: {Name:mkac062ea72fce040eaa4b5cd499c8a1bc2a3b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:57.176863   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:25:57.176914   55982 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:25:57.176924   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:25:57.176943   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:25:57.176967   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:25:57.176990   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:25:57.177026   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:25:57.177598   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:25:57.214307   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:25:57.242380   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:25:57.270810   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:25:57.298840   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:25:57.327424   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:25:57.354944   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:25:57.383921   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:25:57.415824   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:25:57.446943   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:25:57.498524   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:25:57.524299   55982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:25:57.548071   55982 ssh_runner.go:195] Run: openssl version
	I0319 20:25:57.555174   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:25:57.568658   55982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:25:57.573925   55982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:25:57.573975   55982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:25:57.580585   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:25:57.593587   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:25:57.606796   55982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:25:57.612138   55982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:25:57.612195   55982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:25:57.619084   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:25:57.632991   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:25:57.645667   55982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:57.650822   55982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:57.650871   55982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:57.657697   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
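
	Each CA pushed to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted roots. A small Go sketch of the same hash-and-symlink step; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func trustCert(certPath string) error {
		// Ask openssl for the certificate's subject hash, as in the log above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}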
	I0319 20:25:57.671652   55982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:25:57.678066   55982 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 20:25:57.678131   55982 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:25:57.678227   55982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:25:57.678281   55982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:25:57.719280   55982 cri.go:89] found id: ""
	I0319 20:25:57.719367   55982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 20:25:57.731073   55982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:25:57.742226   55982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:25:57.755538   55982 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:25:57.755557   55982 kubeadm.go:156] found existing configuration files:
	
	I0319 20:25:57.755603   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:25:57.767830   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:25:57.767899   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:25:57.778642   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:25:57.790544   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:25:57.790619   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:25:57.802730   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:25:57.813705   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:25:57.813756   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:25:57.825964   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:25:57.837099   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:25:57.837150   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
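
	The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when it is missing or does not match, so that kubeadm can regenerate it. A rough Go equivalent; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func cleanupStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // already points at the right endpoint, keep it
			}
			os.Remove(f) // missing or stale: remove so kubeadm regenerates it
			fmt.Println("removed stale config:", f)
		}
	}

	func main() {
		cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}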
	I0319 20:25:57.848351   55982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:25:57.987481   55982 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:25:57.987556   55982 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:25:58.185352   55982 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:25:58.185497   55982 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:25:58.185618   55982 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:25:58.461759   55982 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:25:57.371458   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:25:57.372014   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:25:57.372035   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:25:57.371941   56810 retry.go:31] will retry after 2.802320058s: waiting for machine to come up
	I0319 20:26:00.177425   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:00.177985   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:26:00.178013   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:26:00.177937   56810 retry.go:31] will retry after 3.584612311s: waiting for machine to come up
	I0319 20:25:58.464609   55982 out.go:204]   - Generating certificates and keys ...
	I0319 20:25:58.464729   55982 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:25:58.464818   55982 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:25:58.819523   55982 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 20:25:59.010342   55982 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 20:25:59.289340   55982 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 20:25:59.481304   55982 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 20:25:59.772779   55982 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 20:25:59.772969   55982 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	I0319 20:26:00.302141   55982 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 20:26:00.302338   55982 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	I0319 20:26:00.676075   55982 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 20:26:00.924844   55982 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 20:26:01.100878   55982 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 20:26:01.100971   55982 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:26:01.215620   55982 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:26:01.525378   55982 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:26:01.790567   55982 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:26:02.183004   55982 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:26:02.199227   55982 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:26:02.200349   55982 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:26:02.200402   55982 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:26:02.346678   55982 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:26:03.764013   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:03.764533   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:26:03.764564   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:26:03.764479   56810 retry.go:31] will retry after 2.827373203s: waiting for machine to come up
	I0319 20:26:02.348574   55982 out.go:204]   - Booting up control plane ...
	I0319 20:26:02.348716   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:26:02.356184   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:26:02.357288   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:26:02.358196   55982 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:26:02.362557   55982 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:26:06.593745   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:06.594157   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:26:06.594180   56204 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:26:06.594110   56810 retry.go:31] will retry after 3.876764964s: waiting for machine to come up
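
	Meanwhile the kvm2 driver for no-preload-414130 keeps polling the libvirt DHCP lease and retrying with a growing interval until the VM reports an IP. A minimal sketch of such a retry loop; the backoff policy shown is an assumption for illustration, not the exact one in retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered backoff
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		// lookupIP here is a stand-in for the libvirt lease query.
		ip, err := waitForIP(func() (string, error) { return "192.168.72.29", nil }, 10)
		fmt.Println(ip, err)
	}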
	I0319 20:26:12.190266   56472 start.go:364] duration metric: took 56.70973361s to acquireMachinesLock for "kubernetes-upgrade-853797"
	I0319 20:26:12.190320   56472 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:26:12.190362   56472 fix.go:54] fixHost starting: 
	I0319 20:26:12.190783   56472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:26:12.190832   56472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:26:12.207934   56472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
	I0319 20:26:12.208346   56472 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:26:12.208893   56472 main.go:141] libmachine: Using API Version  1
	I0319 20:26:12.208919   56472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:26:12.209238   56472 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:26:12.209435   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:12.209589   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetState
	I0319 20:26:12.211256   56472 fix.go:112] recreateIfNeeded on kubernetes-upgrade-853797: state=Running err=<nil>
	W0319 20:26:12.211274   56472 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:26:12.213189   56472 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-853797" VM ...
	I0319 20:26:10.474487   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.475952   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.475966   56204 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:26:10.475980   56204 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:26:10.476001   56204 main.go:141] libmachine: (no-preload-414130) DBG | unable to find host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130
	I0319 20:26:10.547730   56204 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:26:10.547757   56204 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:26:10.547771   56204 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:26:10.550727   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.551268   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:10.551301   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.551457   56204 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:26:10.551486   56204 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:26:10.551544   56204 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:26:10.551566   56204 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:26:10.551601   56204 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:26:10.680224   56204 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
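
	For the first connection the driver shells out to the system ssh binary with host-key checking disabled and the machine's generated private key; the option list is the one printed above. A sketch of assembling that invocation in Go (the helper is illustrative):

	package main

	import (
		"os"
		"os/exec"
	)

	func externalSSH(ip, keyPath, remoteCmd string) *exec.Cmd {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "ControlMaster=no",
			"-o", "ControlPath=none",
			"-o", "LogLevel=quiet",
			"-o", "PasswordAuthentication=no",
			"-o", "ServerAliveInterval=60",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			remoteCmd,
		}
		cmd := exec.Command("/usr/bin/ssh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd
	}

	func main() {
		key := "/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa"
		_ = externalSSH("192.168.72.29", key, "exit 0").Run()
	}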
	I0319 20:26:10.680516   56204 main.go:141] libmachine: (no-preload-414130) KVM machine creation complete!
	I0319 20:26:10.680899   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:26:10.681425   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:10.681631   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:10.681814   56204 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:26:10.681834   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:26:10.682887   56204 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:26:10.682899   56204 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:26:10.682904   56204 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:26:10.682910   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:10.684887   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.685280   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:10.685309   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.685450   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:10.685642   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:10.685819   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:10.685966   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:10.686130   56204 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:10.686314   56204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:26:10.686324   56204 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:26:10.796058   56204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:26:10.796109   56204 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:26:10.796122   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:10.798779   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.799101   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:10.799132   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.799261   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:10.799445   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:10.799600   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:10.799751   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:10.799892   56204 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:10.800057   56204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:26:10.800069   56204 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:26:10.913677   56204 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:26:10.913762   56204 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:26:10.913773   56204 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:26:10.913784   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:26:10.914031   56204 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:26:10.914049   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:26:10.914270   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:10.916780   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.917198   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:10.917231   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:10.917367   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:10.917549   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:10.917726   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:10.917898   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:10.918085   56204 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:10.918282   56204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:26:10.918295   56204 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:26:11.044130   56204 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:26:11.044157   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:11.046755   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.047137   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.047163   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.047322   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:11.047556   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:11.047710   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:11.047891   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:11.048032   56204 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:11.048253   56204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:26:11.048292   56204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:26:11.169929   56204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:26:11.169958   56204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:26:11.169982   56204 buildroot.go:174] setting up certificates
	I0319 20:26:11.169996   56204 provision.go:84] configureAuth start
	I0319 20:26:11.170019   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:26:11.170341   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:26:11.173467   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.173800   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.173836   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.173989   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:11.176189   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.176475   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.176501   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.176652   56204 provision.go:143] copyHostCerts
	I0319 20:26:11.176712   56204 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:26:11.176722   56204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:26:11.176775   56204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:26:11.176936   56204 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:26:11.176949   56204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:26:11.176979   56204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:26:11.177084   56204 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:26:11.177094   56204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:26:11.177121   56204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:26:11.177200   56204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
	I0319 20:26:11.452765   56204 provision.go:177] copyRemoteCerts
	I0319 20:26:11.452821   56204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:26:11.452846   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:11.455338   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.455651   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.455680   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.455848   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:11.456073   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:11.456270   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:11.456424   56204 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:26:11.543657   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:26:11.572049   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:26:11.599843   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:26:11.626840   56204 provision.go:87] duration metric: took 456.832602ms to configureAuth
	I0319 20:26:11.626878   56204 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:26:11.627065   56204 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:26:11.627135   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:11.629878   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.630287   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.630321   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.630484   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:11.630685   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:11.630822   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:11.631009   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:11.631159   56204 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:11.631321   56204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:26:11.631335   56204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:26:11.923448   56204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
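
	The drop-in written here, /etc/sysconfig/crio.minikube, passes an --insecure-registry flag covering the cluster's service CIDR to CRI-O before the service is restarted. A small Go sketch that renders the same file body (helper name illustrative):

	package main

	import "fmt"

	// crioMinikubeOptions renders the body of /etc/sysconfig/crio.minikube.
	func crioMinikubeOptions(serviceCIDR string) string {
		return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	}

	func main() {
		fmt.Print(crioMinikubeOptions("10.96.0.0/12"))
	}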
	
	I0319 20:26:11.923500   56204 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:26:11.923512   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetURL
	I0319 20:26:11.924807   56204 main.go:141] libmachine: (no-preload-414130) DBG | Using libvirt version 6000000
	I0319 20:26:11.926936   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.927262   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.927288   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.927535   56204 main.go:141] libmachine: Docker is up and running!
	I0319 20:26:11.927550   56204 main.go:141] libmachine: Reticulating splines...
	I0319 20:26:11.927556   56204 client.go:171] duration metric: took 24.243829898s to LocalClient.Create
	I0319 20:26:11.927577   56204 start.go:167] duration metric: took 24.24390473s to libmachine.API.Create "no-preload-414130"
	I0319 20:26:11.927584   56204 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:26:11.927595   56204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:26:11.927607   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:11.927810   56204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:26:11.927838   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:11.929861   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.930201   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:11.930232   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:11.930340   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:11.930514   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:11.930679   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:11.930842   56204 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:26:12.016847   56204 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:26:12.021921   56204 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:26:12.021947   56204 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:26:12.022022   56204 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:26:12.022142   56204 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:26:12.022247   56204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:26:12.032440   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:26:12.061284   56204 start.go:296] duration metric: took 133.687466ms for postStartSetup
	I0319 20:26:12.061345   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:26:12.061962   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:26:12.065552   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.065925   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:12.065952   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.066198   56204 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:26:12.066378   56204 start.go:128] duration metric: took 24.404334736s to createHost
	I0319 20:26:12.066408   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:12.068502   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.068822   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:12.068851   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.068995   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:12.069189   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:12.069335   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:12.069466   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:12.069655   56204 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:12.069839   56204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:26:12.069851   56204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:26:12.190064   56204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879972.167532952
	
	I0319 20:26:12.190090   56204 fix.go:216] guest clock: 1710879972.167532952
	I0319 20:26:12.190100   56204 fix.go:229] Guest: 2024-03-19 20:26:12.167532952 +0000 UTC Remote: 2024-03-19 20:26:12.066396329 +0000 UTC m=+76.637375425 (delta=101.136623ms)
	I0319 20:26:12.190158   56204 fix.go:200] guest clock delta is within tolerance: 101.136623ms
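
	The guest clock is compared against the host and only resynced when the delta exceeds a tolerance; here the 101ms skew is accepted. A tiny sketch of that check; the 2s threshold is an assumption for illustration, not necessarily minikube's value:

	package main

	import (
		"fmt"
		"time"
	)

	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(101 * time.Millisecond) // delta observed in the log
		fmt.Println("within tolerance:", clockWithinTolerance(guest, host, 2*time.Second))
	}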
	I0319 20:26:12.190168   56204 start.go:83] releasing machines lock for "no-preload-414130", held for 24.52830724s
	I0319 20:26:12.190203   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:12.190521   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:26:12.193118   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.193537   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:12.193567   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.193766   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:12.194303   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:12.194520   56204 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:26:12.194589   56204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:26:12.194643   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:12.194708   56204 ssh_runner.go:195] Run: cat /version.json
	I0319 20:26:12.194727   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:26:12.197659   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.197972   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.198119   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:12.198150   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.198268   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:12.198382   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:12.198435   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:12.198456   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:12.198616   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:12.198656   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:26:12.198773   56204 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:26:12.198990   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:26:12.199154   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:26:12.199294   56204 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:26:12.282725   56204 ssh_runner.go:195] Run: systemctl --version
	I0319 20:26:12.305968   56204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:26:12.478754   56204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:26:12.485754   56204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:26:12.485817   56204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:26:12.507023   56204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
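(Annotation: the two log lines above show minikube scanning /etc/cni/net.d and renaming any bridge/podman CNI configs so they stop applying. A cleaned-up, runnable sketch of that step, with quoting added for clarity; the ".mk_disabled" suffix is the convention visible in the log, not something introduced here:)

    # Rename bridge/podman CNI configs so the cluster CNI is the only one active.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;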
	I0319 20:26:12.507046   56204 start.go:494] detecting cgroup driver to use...
	I0319 20:26:12.507111   56204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:26:12.531004   56204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:26:12.549133   56204 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:26:12.549194   56204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:26:12.567539   56204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:26:12.586145   56204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:26:12.724819   56204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:26:12.896828   56204 docker.go:233] disabling docker service ...
	I0319 20:26:12.896882   56204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:26:12.914013   56204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:26:12.932765   56204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:26:13.085235   56204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:26:13.229257   56204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
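(Annotation: a condensed sketch of the runtime-cleanup sequence logged above, using only the units named in the log; in the real flow each command is issued separately and failures from already-inactive units are tolerated:)

    sudo systemctl stop -f containerd || true
    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet service docker   # final check that docker is down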
	I0319 20:26:13.248022   56204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:26:13.269506   56204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:26:13.269564   56204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:13.280787   56204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:26:13.280867   56204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:13.291928   56204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:13.302586   56204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:13.313724   56204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:26:13.324880   56204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:13.336119   56204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:13.355456   56204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
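(Annotation: the sed commands above edit CRI-O's drop-in config in place. An illustrative way to verify their net effect, with the expected values taken from the logged commands; surrounding keys in the file are left untouched:)

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (order may differ):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",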
	I0319 20:26:13.366592   56204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:26:13.376506   56204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:26:13.376564   56204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:26:13.391640   56204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:26:13.403511   56204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:26:13.530569   56204 ssh_runner.go:195] Run: sudo systemctl restart crio
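(Annotation: sketch of the kernel/network prep and restart performed above. The earlier sysctl failed because /proc/sys/net/bridge did not exist yet; it should resolve once br_netfilter is loaded:)

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo systemctl daemon-reload
    sudo systemctl restart crio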
	I0319 20:26:13.689293   56204 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:26:13.689372   56204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:26:13.694682   56204 start.go:562] Will wait 60s for crictl version
	I0319 20:26:13.694742   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:13.698837   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:26:13.753901   56204 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:26:13.753988   56204 ssh_runner.go:195] Run: crio --version
	I0319 20:26:13.795242   56204 ssh_runner.go:195] Run: crio --version
	I0319 20:26:13.828884   56204 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:26:12.214599   56472 machine.go:94] provisionDockerMachine start ...
	I0319 20:26:12.214624   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:12.214823   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:12.217754   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.218255   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.218315   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.218427   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:12.218599   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.218793   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.218979   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:12.219168   56472 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:12.219421   56472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:26:12.219436   56472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:26:12.326092   56472 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-853797
	
	I0319 20:26:12.326122   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:26:12.326483   56472 buildroot.go:166] provisioning hostname "kubernetes-upgrade-853797"
	I0319 20:26:12.326513   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:26:12.326731   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:12.329617   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.330016   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.330057   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.330281   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:12.330478   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.330659   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.330799   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:12.330977   56472 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:12.331193   56472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:26:12.331212   56472 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-853797 && echo "kubernetes-upgrade-853797" | sudo tee /etc/hostname
	I0319 20:26:12.463165   56472 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-853797
	
	I0319 20:26:12.463190   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:12.466423   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.466810   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.466845   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.467028   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:12.467236   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.467403   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.467557   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:12.467734   56472 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:12.467890   56472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:26:12.467907   56472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-853797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-853797/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-853797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:26:12.573842   56472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
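(Annotation: an illustrative check, not part of minikube's flow, that the hostname commands above took effect on the guest; expected values come straight from the log:)

    hostname                       # expect: kubernetes-upgrade-853797
    cat /etc/hostname              # expect: kubernetes-upgrade-853797
    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 kubernetes-upgrade-853797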
	I0319 20:26:12.573867   56472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:26:12.573888   56472 buildroot.go:174] setting up certificates
	I0319 20:26:12.573896   56472 provision.go:84] configureAuth start
	I0319 20:26:12.573905   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetMachineName
	I0319 20:26:12.574161   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:26:12.577226   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.577673   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.577708   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.577885   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:12.580058   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.580587   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.580612   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.581709   56472 provision.go:143] copyHostCerts
	I0319 20:26:12.581772   56472 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:26:12.581782   56472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:26:12.581833   56472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:26:12.581951   56472 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:26:12.581964   56472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:26:12.581996   56472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:26:12.582102   56472 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:26:12.582122   56472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:26:12.582155   56472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:26:12.582223   56472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-853797 san=[127.0.0.1 192.168.50.116 kubernetes-upgrade-853797 localhost minikube]
	I0319 20:26:12.746778   56472 provision.go:177] copyRemoteCerts
	I0319 20:26:12.746827   56472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:26:12.746850   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:12.749533   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.749938   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.749965   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.750171   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:12.750369   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.750517   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:12.750681   56472 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:26:12.835657   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:26:12.870658   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:26:12.902664   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0319 20:26:12.936142   56472 provision.go:87] duration metric: took 362.234632ms to configureAuth
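(Annotation: one illustrative way, not part of minikube's flow, to confirm that the server certificate copied above carries the SANs listed in the provision log; the path is the remote location shown in the scp lines:)

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # expect entries covering 127.0.0.1, 192.168.50.116, kubernetes-upgrade-853797, localhost, minikube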
	I0319 20:26:12.936173   56472 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:26:12.936448   56472 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:26:12.936541   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:12.939739   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.940173   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:12.940210   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:12.940370   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:12.940595   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.940781   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:12.940933   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:12.941086   56472 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:12.941254   56472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:26:12.941269   56472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:26:13.830225   56204 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:26:13.833057   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:13.833425   56204 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:26:13.833450   56204 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:26:13.833681   56204 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:26:13.838572   56204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
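(Annotation: quick illustrative check of the /etc/hosts rewrite performed by the one-liner above; the gateway IP is the one shown in the log:)

    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.72.1  host.minikube.internal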
	I0319 20:26:13.852998   56204 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:26:13.853094   56204 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:26:13.853124   56204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:26:13.892685   56204 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:26:13.892708   56204 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:26:13.892771   56204 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:26:13.892786   56204 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:26:13.892822   56204 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:26:13.892830   56204 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:26:13.892863   56204 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:26:13.892836   56204 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:26:13.892923   56204 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:26:13.892891   56204 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:26:13.894178   56204 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:26:13.894185   56204 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:26:13.894178   56204 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:26:13.894179   56204 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:26:13.894230   56204 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:26:13.894241   56204 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:26:13.894178   56204 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:26:13.894211   56204 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:26:14.017578   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:26:14.034528   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:26:14.039506   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:26:14.048637   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:26:14.056283   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:26:14.063126   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:26:14.077928   56204 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0319 20:26:14.077961   56204 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0319 20:26:14.077998   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.143957   56204 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:26:14.144005   56204 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:26:14.144057   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.150914   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:26:14.159086   56204 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:26:14.159125   56204 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:26:14.159180   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.190642   56204 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:26:14.190688   56204 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:26:14.190740   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.203287   56204 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:26:14.203327   56204 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:26:14.203383   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.206735   56204 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:26:14.206775   56204 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:26:14.206819   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.206823   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0319 20:26:14.206860   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:26:14.240530   56204 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:26:14.240585   56204 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:26:14.240598   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:26:14.240625   56204 ssh_runner.go:195] Run: which crictl
	I0319 20:26:14.240641   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:26:14.240678   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:26:14.294589   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0319 20:26:14.294604   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:26:14.294691   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0319 20:26:14.303284   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:26:14.303402   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:26:14.356214   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:26:14.356281   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:26:14.356380   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:26:14.375775   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:26:14.375834   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:26:14.375879   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:26:14.375923   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:26:14.396876   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:26:14.396961   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0319 20:26:14.396995   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:26:14.396997   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0319 20:26:14.396995   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0': No such file or directory
	I0319 20:26:14.397039   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (32648704 bytes)
	I0319 20:26:14.477750   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0': No such file or directory
	I0319 20:26:14.477792   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (19201536 bytes)
	I0319 20:26:14.477824   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:26:14.477863   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0-beta.0': No such file or directory
	I0319 20:26:14.477883   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0': No such file or directory
	I0319 20:26:14.477894   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (29017088 bytes)
	I0319 20:26:14.477900   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (31010816 bytes)
	I0319 20:26:14.477938   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:26:14.477949   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0319 20:26:14.477964   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0319 20:26:14.516556   56204 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0319 20:26:14.516590   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0319 20:26:14.516615   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0319 20:26:14.516621   56204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0319 20:26:14.798999   56204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:26:14.935703   56204 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0319 20:26:14.935738   56204 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:26:14.935795   56204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:26:15.020870   56204 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:26:15.020914   56204 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:26:15.020966   56204 ssh_runner.go:195] Run: which crictl
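(Annotation: a condensed, per-image sketch of the cache-load cycle the log walks through above, shown for a single hypothetical image; the tag and tarball path are taken from the log, the IMG/TAR names are only for illustration, and in the real flow the tarball is first copied from the host cache over scp:)

    IMG=registry.k8s.io/pause:3.9
    TAR=/var/lib/minikube/images/pause_3.9
    sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1 || {
      sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop a stale or mismatched tag
      sudo podman load -i "$TAR"                            # load the cached tarball into the shared container storage
    }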
	I0319 20:26:19.114632   56472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:26:19.114664   56472 machine.go:97] duration metric: took 6.900048554s to provisionDockerMachine
	I0319 20:26:19.114678   56472 start.go:293] postStartSetup for "kubernetes-upgrade-853797" (driver="kvm2")
	I0319 20:26:19.114694   56472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:26:19.114714   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:19.115095   56472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:26:19.115126   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:19.118347   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.118814   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:19.118842   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.119090   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:19.119290   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:19.119465   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:19.119635   56472 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:26:19.204523   56472 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:26:19.209824   56472 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:26:19.209853   56472 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:26:19.209941   56472 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:26:19.210064   56472 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:26:19.210199   56472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:26:19.225890   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:26:19.259555   56472 start.go:296] duration metric: took 144.863149ms for postStartSetup
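(Annotation: illustrative check that the file-sync step above landed the host asset on the guest; the path is the one logged by filesync:)

    ls -l /etc/ssl/certs/173012.pem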
	I0319 20:26:19.259597   56472 fix.go:56] duration metric: took 7.069266018s for fixHost
	I0319 20:26:19.259620   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:19.262809   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.263245   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:19.263301   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.263482   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:19.263724   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:19.263914   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:19.264027   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:19.264168   56472 main.go:141] libmachine: Using SSH client type: native
	I0319 20:26:19.264379   56472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I0319 20:26:19.264392   56472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:26:19.378587   56472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879979.372523746
	
	I0319 20:26:19.378614   56472 fix.go:216] guest clock: 1710879979.372523746
	I0319 20:26:19.378624   56472 fix.go:229] Guest: 2024-03-19 20:26:19.372523746 +0000 UTC Remote: 2024-03-19 20:26:19.25960198 +0000 UTC m=+63.930841041 (delta=112.921766ms)
	I0319 20:26:19.378649   56472 fix.go:200] guest clock delta is within tolerance: 112.921766ms
	I0319 20:26:19.378655   56472 start.go:83] releasing machines lock for "kubernetes-upgrade-853797", held for 7.18835851s
	I0319 20:26:19.378680   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:19.378973   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:26:19.382291   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.382648   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:19.382671   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.382801   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:19.383343   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:19.383532   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:26:19.383631   56472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:26:19.383676   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:19.383730   56472 ssh_runner.go:195] Run: cat /version.json
	I0319 20:26:19.383760   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHHostname
	I0319 20:26:19.386580   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.386901   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.386957   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:19.386973   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.387117   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:19.387284   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:19.387353   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:19.387380   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:19.387461   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:19.387623   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHPort
	I0319 20:26:19.387635   56472 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:26:19.387795   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHKeyPath
	I0319 20:26:19.387911   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetSSHUsername
	I0319 20:26:19.388032   56472 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/kubernetes-upgrade-853797/id_rsa Username:docker}
	I0319 20:26:19.486731   56472 ssh_runner.go:195] Run: systemctl --version
	I0319 20:26:19.493481   56472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:26:19.673121   56472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:26:19.679998   56472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:26:19.680070   56472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:26:19.690134   56472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 20:26:19.690154   56472 start.go:494] detecting cgroup driver to use...
	I0319 20:26:19.690218   56472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:26:19.711924   56472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:26:19.727301   56472 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:26:19.727390   56472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:26:19.743470   56472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:26:19.760500   56472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:26:19.933112   56472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:26:20.085268   56472 docker.go:233] disabling docker service ...
	I0319 20:26:20.085341   56472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:26:20.106939   56472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:26:20.121461   56472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:26:20.287026   56472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:26:17.512787   56204 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.576966926s)
	I0319 20:26:17.512822   56204 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:26:17.512819   56204 ssh_runner.go:235] Completed: which crictl: (2.491826884s)
	I0319 20:26:17.512852   56204 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:26:17.512887   56204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:26:17.512891   56204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:26:17.566528   56204 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:26:17.566623   56204 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:26:19.697222   56204 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.130577617s)
	I0319 20:26:19.697233   56204 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.184258668s)
	I0319 20:26:19.697253   56204 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:26:19.697253   56204 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0319 20:26:19.697272   56204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0319 20:26:19.697287   56204 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:26:19.697330   56204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:26:20.466375   56472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:26:20.489459   56472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:26:20.514496   56472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:26:20.514569   56472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.531371   56472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:26:20.531442   56472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.544187   56472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.557254   56472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.572533   56472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:26:20.587367   56472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.602315   56472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.616990   56472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:26:20.630301   56472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:26:20.645323   56472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:26:20.657187   56472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:26:20.831121   56472 ssh_runner.go:195] Run: sudo systemctl restart crio
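Taken together, the sed invocations above rewrite CRI-O's 02-crio.conf drop-in (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before the daemon is restarted. A hedged way to confirm the edits landed; the expected values are copied from the commands above, not read from the node:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
	sudo systemctl daemon-reload && sudo systemctl restart crio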
	I0319 20:26:22.085698   56204 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.388335341s)
	I0319 20:26:22.085728   56204 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:26:22.085759   56204 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:26:22.085806   56204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:26:24.240732   56204 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.154899807s)
	I0319 20:26:24.240757   56204 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:26:24.240782   56204 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:26:24.240819   56204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
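This phase copies each cached image tarball under /var/lib/minikube/images and hands it to podman, falling back to scp when the stat check finds no local copy. A minimal sketch of loading one image by hand and checking that the runtime can see it (the grep pattern is illustrative):

	sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	sudo crictl images | grep kube-apiserver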
	I0319 20:26:26.201539   56472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.370381123s)
	I0319 20:26:26.201582   56472 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:26:26.201624   56472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:26:26.208301   56472 start.go:562] Will wait 60s for crictl version
	I0319 20:26:26.208355   56472 ssh_runner.go:195] Run: which crictl
	I0319 20:26:26.213291   56472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:26:26.254334   56472 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
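After restarting CRI-O the test waits, with a 60-second budget, for the socket to reappear and for crictl to answer a version probe. A rough shell equivalent of that wait loop (the timeout comes from the "Will wait 60s" lines above):

	for _ in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	sudo /usr/bin/crictl version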
	I0319 20:26:26.254420   56472 ssh_runner.go:195] Run: crio --version
	I0319 20:26:26.300755   56472 ssh_runner.go:195] Run: crio --version
	I0319 20:26:26.345387   56472 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:26:26.346637   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetIP
	I0319 20:26:26.349507   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:26.349917   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:a8:7f", ip: ""} in network mk-kubernetes-upgrade-853797: {Iface:virbr2 ExpiryTime:2024-03-19 21:24:45 +0000 UTC Type:0 Mac:52:54:00:39:a8:7f Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:kubernetes-upgrade-853797 Clientid:01:52:54:00:39:a8:7f}
	I0319 20:26:26.349948   56472 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined IP address 192.168.50.116 and MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:26:26.350180   56472 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:26:26.356971   56472 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:26:26.357080   56472 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:26:26.357149   56472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:26:26.411992   56472 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:26:26.412021   56472 crio.go:433] Images already preloaded, skipping extraction
	I0319 20:26:26.412092   56472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:26:26.463602   56472 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:26:26.463685   56472 cache_images.go:84] Images are preloaded, skipping loading
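The preload check is simply a comparison between the runtime's image list and the set required for v1.30.0-beta.0; when nothing is missing, extraction and cache loading are skipped. A sketch of the underlying query (the jq filter is an illustrative assumption; jq may not be present on the node):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort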
	I0319 20:26:26.463707   56472 kubeadm.go:928] updating node { 192.168.50.116 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:26:26.463917   56472 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-853797 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
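The [Unit]/[Service] fragment above becomes a systemd drop-in for the kubelet; a condensed sketch of how it is installed and activated (the concrete scp, daemon-reload and start calls appear a few lines below):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	# 10-kubeadm.conf carries the ExecStart override shown above
	sudo systemctl daemon-reload
	sudo systemctl start kubelet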
	I0319 20:26:26.464047   56472 ssh_runner.go:195] Run: crio config
	I0319 20:26:26.533019   56472 cni.go:84] Creating CNI manager for ""
	I0319 20:26:26.533047   56472 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:26:26.533069   56472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:26:26.533095   56472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-853797 NodeName:kubernetes-upgrade-853797 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:26:26.533279   56472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-853797"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
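The rendered kubeadm manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stacked in one file) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new, as the scp line below shows. A hedged sketch of sanity-checking such a file without touching the running node; this is not the command minikube itself runs:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run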
	I0319 20:26:26.533342   56472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:26:26.545405   56472 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:26:26.545484   56472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:26:26.557406   56472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0319 20:26:26.580807   56472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:26:26.601601   56472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0319 20:26:26.620892   56472 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I0319 20:26:26.625507   56472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:26:26.784207   56472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:26:26.803504   56472 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797 for IP: 192.168.50.116
	I0319 20:26:26.803529   56472 certs.go:194] generating shared ca certs ...
	I0319 20:26:26.803547   56472 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:26:26.803726   56472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:26:26.803765   56472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:26:26.803777   56472 certs.go:256] generating profile certs ...
	I0319 20:26:26.803889   56472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/client.key
	I0319 20:26:26.803958   56472 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key.d15cc93c
	I0319 20:26:26.804005   56472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.key
	I0319 20:26:26.804137   56472 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:26:26.804218   56472 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:26:26.804235   56472 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:26:26.804295   56472 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:26:26.804327   56472 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:26:26.804353   56472 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:26:26.804395   56472 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:26:26.805030   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:26:26.836514   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:26:26.868233   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:26:26.899390   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:26:26.930060   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0319 20:26:26.961470   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:26:26.989911   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:26:27.024341   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/kubernetes-upgrade-853797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:26:27.057266   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:26:27.086241   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:26:27.113969   56472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:26:27.145706   56472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:26:27.166748   56472 ssh_runner.go:195] Run: openssl version
	I0319 20:26:27.172946   56472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:26:27.184578   56472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:26:27.190218   56472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:26:27.190274   56472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:26:27.196695   56472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:26:27.206989   56472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:26:27.218416   56472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:26:27.224132   56472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:26:27.224187   56472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:26:27.230631   56472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:26:27.242009   56472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:26:27.258476   56472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:26:27.264912   56472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:26:27.264962   56472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:26:27.273269   56472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
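Each CA bundle is made discoverable by OpenSSL by linking it under its subject-hash name in /etc/ssl/certs; the three command groups above repeat the same pattern for minikubeCA.pem, 17301.pem and 173012.pem (hashes b5213941, 51391683 and 3ec20f2e). A sketch of the pattern for one bundle:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"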
	I0319 20:26:27.284284   56472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:26:27.289807   56472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:26:27.296484   56472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:26:27.302852   56472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:26:27.309749   56472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:26:27.316896   56472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:26:27.323443   56472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
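The -checkend 86400 probes ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means no regeneration is needed. Sketch for a single certificate:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h"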
	I0319 20:26:27.329903   56472 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-853797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-853797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:26:27.329978   56472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:26:27.330022   56472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:26:27.376689   56472 cri.go:89] found id: "8c26142f6734ead463e63adc785d5aae60edf97fa7c02937ccf7d0b6ae9c18c6"
	I0319 20:26:27.376712   56472 cri.go:89] found id: "2c4b516611ccbbbf6546debc610a5b8418bb4eaf8b1701814e4e913714c1b3f9"
	I0319 20:26:27.376717   56472 cri.go:89] found id: "fbc420d3183c73690dd22c46914aace22717607b24e04a0208be64b1611451e2"
	I0319 20:26:27.376721   56472 cri.go:89] found id: "dc2b8f64e12c939100a0d3930bc408486fdd4c7f016ca3c365d5bd79099c8cd8"
	I0319 20:26:27.376723   56472 cri.go:89] found id: "a2f357d984af3fa66c19b9ba3ca0f648c939da3da84d779b4912538ec1edeff9"
	I0319 20:26:27.376726   56472 cri.go:89] found id: "38d53846a1f2feeaf95aef12673a1d24dac318679ebb054f5983695e2af7d34f"
	I0319 20:26:27.376729   56472 cri.go:89] found id: "93e923f2a08662be7e9bc45c545ef2fa12e522f259a52ea6097e236e8dd22065"
	I0319 20:26:27.376731   56472 cri.go:89] found id: "8c3e07026a6111b756f926396a062a18cecd326439891f80fac198f6e67f56df"
	I0319 20:26:27.376734   56472 cri.go:89] found id: "ec5559f8a3b4af7c25c47c84fb201693afd4526ca598aeeb3c6cd94ecb8918d0"
	I0319 20:26:27.376743   56472 cri.go:89] found id: ""
	I0319 20:26:27.376779   56472 ssh_runner.go:195] Run: sudo runc list -f json
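StartCluster begins by enumerating every kube-system container the runtime knows about, running or exited, via a pod-namespace label filter; the nine IDs listed above come from exactly this query. Run by hand it reduces to:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system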
	
	
	==> CRI-O <==
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.746627511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879998746544250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64fe0d47-7c57-4aa5-88b3-d94f52bcb76c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.747838999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbdaa636-83ba-4e99-8077-d9dfb69afef4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.747926288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbdaa636-83ba-4e99-8077-d9dfb69afef4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.748354410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cf90ccb75e930fb12dad1cd2b8b50b87a4858badeba6c58803a1e6988ea8f3a,PodSandboxId:81ea8122b3a92f35b29ae3ef4ed291dacfb3f8adfb7cb95c52f9f31dafe82b0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995451378553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e70a3aaba46795709ed8705cc6f03367428b684f484fdcb6ac91aaa0a6110a,PodSandboxId:fd7c3309282c127205f3dcc9d3663eadbf1a29a21f074d3186adce91c9db8afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995280918179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a7870c5a1103788e1238b9a95ebf4f2c485a360c16438a5f8c45bc30813463,PodSandboxId:52192eca4700c9eed2160eabdb3078c89bcc94aff0e34fd2352245e322186b00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1710879994954760328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b260731ab2d0d1961e83b3f9cf676564aec76e29630e4707746deeed8cfbc9,PodSandboxId:851e4e336ea5500f451b0bd1492d2a5a86cf3cc895aa8ada0560e5041ca610af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,C
reatedAt:1710879994887419182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0146a2648dc447c0d802ec25437c9e6487af5c5e314dd89bc293e59e581b3581,PodSandboxId:a6ec9fa3d4c4d692716c1c859a66c2048cd69b5db1afe75b38ee2c719abc3d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879990148549712,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7795bc2013b7c5af364d4a375c36622b94cb1830f9e6cd8bd369787eb48f2be,PodSandboxId:249f605d03f0592b28f3fe5ae6099860b4b79394273246841a954914439b63fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710879990115288228,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d3cdd006fbe5eca343c35fea7775be65e2899678cec10d850366883de44f25,PodSandboxId:3dc4eb51b3d85d626b84cad3b476312365e832df6128349a2df4c657562a48b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710879990075413801,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e114e9c688d7fab23f509590e5890c6e0df6480f91ed3af26d3bfbfa96bfd76,PodSandboxId:42d373bfa3f964bfb846d54b4a742281adfda35ed92e62050a20e4421532913e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710879990065315336,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c26142f6734ead463e63adc785d5aae60edf97fa7c02937ccf7d0b6ae9c18c6,PodSandboxId:5c1dfe8f2d732dd490b66143e26f181500e3092968f1fd842ccbe1db206dedec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710879956948618603,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f357d984af3fa66c19b9ba3ca0f648c939da3da84d779b4912538ec1edeff9,PodSandboxId:ead0b84a844613063d0753511b435e1e1ad05ca42770a0daabbdc7edfe05f8ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_EXITED,CreatedAt:1710879925907211055,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c4b516611ccbbbf6546debc610a5b8418bb4eaf8b1701814e4e913714c1b3f9,PodSandboxId:c8f8778928bce7bd15182a77c37410104256ab4c8a4418e2e3188dee3ed2d8a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926118309966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc420d3183c73690dd22c46914aace22717607b24e04a0208be64b1611451e2,PodSandboxId:c84f00d809c6463e5de85d99f07fa588698eacaa81d980f0e80adba5ff25d8ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926083707121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d53846a1f2feeaf95aef12673a1d24dac318679ebb054f5983695e2af7d34f,PodSandboxId:31fb03f46f1d8724ec0c6ac9be837eeb5f1db3c9871f0356efdc7f2f172bd4d9,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1710879906364258143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e923f2a08662be7e9bc45c545ef2fa12e522f259a52ea6097e236e8dd22065,PodSandboxId:422218bc632b8317c2b8deb4d9ca93aefecb22264020b43e15f5680d119f1f57,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1710879906340385793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3e07026a6111b756f926396a062a18cecd326439891f80fac198f6e67f56df,PodSandboxId:93856e747c788b5fc11432d5fc4459053f786a81555133771fa30eb5c05a9fe8,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879906322892627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5559f8a3b4af7c25c47c84fb201693afd4526ca598aeeb3c6cd94ecb8918d0,PodSandboxId:4d65e0930fc035b8b3e30442c58611d20089ce81e94502f3b6afe03672088bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1710879906315275000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbdaa636-83ba-4e99-8077-d9dfb69afef4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.795093284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a50a61c9-6459-42d3-b451-efa808fefd7d name=/runtime.v1.RuntimeService/Version
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.795189257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a50a61c9-6459-42d3-b451-efa808fefd7d name=/runtime.v1.RuntimeService/Version
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.796784411Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d59f3000-3bad-4bae-827a-c140461ec64e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.797252767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879998797223972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d59f3000-3bad-4bae-827a-c140461ec64e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.798212912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4877818d-8b0b-4c82-a10e-4a93c0d57bdb name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.798297193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4877818d-8b0b-4c82-a10e-4a93c0d57bdb name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.798794677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cf90ccb75e930fb12dad1cd2b8b50b87a4858badeba6c58803a1e6988ea8f3a,PodSandboxId:81ea8122b3a92f35b29ae3ef4ed291dacfb3f8adfb7cb95c52f9f31dafe82b0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995451378553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e70a3aaba46795709ed8705cc6f03367428b684f484fdcb6ac91aaa0a6110a,PodSandboxId:fd7c3309282c127205f3dcc9d3663eadbf1a29a21f074d3186adce91c9db8afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995280918179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a7870c5a1103788e1238b9a95ebf4f2c485a360c16438a5f8c45bc30813463,PodSandboxId:52192eca4700c9eed2160eabdb3078c89bcc94aff0e34fd2352245e322186b00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1710879994954760328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b260731ab2d0d1961e83b3f9cf676564aec76e29630e4707746deeed8cfbc9,PodSandboxId:851e4e336ea5500f451b0bd1492d2a5a86cf3cc895aa8ada0560e5041ca610af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,C
reatedAt:1710879994887419182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0146a2648dc447c0d802ec25437c9e6487af5c5e314dd89bc293e59e581b3581,PodSandboxId:a6ec9fa3d4c4d692716c1c859a66c2048cd69b5db1afe75b38ee2c719abc3d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879990148549712,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7795bc2013b7c5af364d4a375c36622b94cb1830f9e6cd8bd369787eb48f2be,PodSandboxId:249f605d03f0592b28f3fe5ae6099860b4b79394273246841a954914439b63fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710879990115288228,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d3cdd006fbe5eca343c35fea7775be65e2899678cec10d850366883de44f25,PodSandboxId:3dc4eb51b3d85d626b84cad3b476312365e832df6128349a2df4c657562a48b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710879990075413801,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e114e9c688d7fab23f509590e5890c6e0df6480f91ed3af26d3bfbfa96bfd76,PodSandboxId:42d373bfa3f964bfb846d54b4a742281adfda35ed92e62050a20e4421532913e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710879990065315336,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c26142f6734ead463e63adc785d5aae60edf97fa7c02937ccf7d0b6ae9c18c6,PodSandboxId:5c1dfe8f2d732dd490b66143e26f181500e3092968f1fd842ccbe1db206dedec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710879956948618603,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f357d984af3fa66c19b9ba3ca0f648c939da3da84d779b4912538ec1edeff9,PodSandboxId:ead0b84a844613063d0753511b435e1e1ad05ca42770a0daabbdc7edfe05f8ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_EXITED,CreatedAt:1710879925907211055,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c4b516611ccbbbf6546debc610a5b8418bb4eaf8b1701814e4e913714c1b3f9,PodSandboxId:c8f8778928bce7bd15182a77c37410104256ab4c8a4418e2e3188dee3ed2d8a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926118309966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc420d3183c73690dd22c46914aace22717607b24e04a0208be64b1611451e2,PodSandboxId:c84f00d809c6463e5de85d99f07fa588698eacaa81d980f0e80adba5ff25d8ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926083707121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d53846a1f2feeaf95aef12673a1d24dac318679ebb054f5983695e2af7d34f,PodSandboxId:31fb03f46f1d8724ec0c6ac9be837eeb5f1db3c9871f0356efdc7f2f172bd4d9,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1710879906364258143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e923f2a08662be7e9bc45c545ef2fa12e522f259a52ea6097e236e8dd22065,PodSandboxId:422218bc632b8317c2b8deb4d9ca93aefecb22264020b43e15f5680d119f1f57,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1710879906340385793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3e07026a6111b756f926396a062a18cecd326439891f80fac198f6e67f56df,PodSandboxId:93856e747c788b5fc11432d5fc4459053f786a81555133771fa30eb5c05a9fe8,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879906322892627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5559f8a3b4af7c25c47c84fb201693afd4526ca598aeeb3c6cd94ecb8918d0,PodSandboxId:4d65e0930fc035b8b3e30442c58611d20089ce81e94502f3b6afe03672088bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1710879906315275000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4877818d-8b0b-4c82-a10e-4a93c0d57bdb name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.850452531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b263a034-0d5f-40c1-b74b-9d5b327c2c53 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.850555046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b263a034-0d5f-40c1-b74b-9d5b327c2c53 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.852104173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72064827-4319-4e1f-8d7c-c8e50c6ea2d3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.852484947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879998852462188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72064827-4319-4e1f-8d7c-c8e50c6ea2d3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.853193460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ac550f2-fa4e-4e97-8904-8301e240d1b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.853275703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ac550f2-fa4e-4e97-8904-8301e240d1b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.853647740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cf90ccb75e930fb12dad1cd2b8b50b87a4858badeba6c58803a1e6988ea8f3a,PodSandboxId:81ea8122b3a92f35b29ae3ef4ed291dacfb3f8adfb7cb95c52f9f31dafe82b0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995451378553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e70a3aaba46795709ed8705cc6f03367428b684f484fdcb6ac91aaa0a6110a,PodSandboxId:fd7c3309282c127205f3dcc9d3663eadbf1a29a21f074d3186adce91c9db8afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995280918179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a7870c5a1103788e1238b9a95ebf4f2c485a360c16438a5f8c45bc30813463,PodSandboxId:52192eca4700c9eed2160eabdb3078c89bcc94aff0e34fd2352245e322186b00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1710879994954760328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b260731ab2d0d1961e83b3f9cf676564aec76e29630e4707746deeed8cfbc9,PodSandboxId:851e4e336ea5500f451b0bd1492d2a5a86cf3cc895aa8ada0560e5041ca610af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,C
reatedAt:1710879994887419182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0146a2648dc447c0d802ec25437c9e6487af5c5e314dd89bc293e59e581b3581,PodSandboxId:a6ec9fa3d4c4d692716c1c859a66c2048cd69b5db1afe75b38ee2c719abc3d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879990148549712,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7795bc2013b7c5af364d4a375c36622b94cb1830f9e6cd8bd369787eb48f2be,PodSandboxId:249f605d03f0592b28f3fe5ae6099860b4b79394273246841a954914439b63fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710879990115288228,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d3cdd006fbe5eca343c35fea7775be65e2899678cec10d850366883de44f25,PodSandboxId:3dc4eb51b3d85d626b84cad3b476312365e832df6128349a2df4c657562a48b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710879990075413801,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e114e9c688d7fab23f509590e5890c6e0df6480f91ed3af26d3bfbfa96bfd76,PodSandboxId:42d373bfa3f964bfb846d54b4a742281adfda35ed92e62050a20e4421532913e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710879990065315336,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c26142f6734ead463e63adc785d5aae60edf97fa7c02937ccf7d0b6ae9c18c6,PodSandboxId:5c1dfe8f2d732dd490b66143e26f181500e3092968f1fd842ccbe1db206dedec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710879956948618603,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f357d984af3fa66c19b9ba3ca0f648c939da3da84d779b4912538ec1edeff9,PodSandboxId:ead0b84a844613063d0753511b435e1e1ad05ca42770a0daabbdc7edfe05f8ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_EXITED,CreatedAt:1710879925907211055,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c4b516611ccbbbf6546debc610a5b8418bb4eaf8b1701814e4e913714c1b3f9,PodSandboxId:c8f8778928bce7bd15182a77c37410104256ab4c8a4418e2e3188dee3ed2d8a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926118309966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc420d3183c73690dd22c46914aace22717607b24e04a0208be64b1611451e2,PodSandboxId:c84f00d809c6463e5de85d99f07fa588698eacaa81d980f0e80adba5ff25d8ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926083707121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d53846a1f2feeaf95aef12673a1d24dac318679ebb054f5983695e2af7d34f,PodSandboxId:31fb03f46f1d8724ec0c6ac9be837eeb5f1db3c9871f0356efdc7f2f172bd4d9,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1710879906364258143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e923f2a08662be7e9bc45c545ef2fa12e522f259a52ea6097e236e8dd22065,PodSandboxId:422218bc632b8317c2b8deb4d9ca93aefecb22264020b43e15f5680d119f1f57,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1710879906340385793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3e07026a6111b756f926396a062a18cecd326439891f80fac198f6e67f56df,PodSandboxId:93856e747c788b5fc11432d5fc4459053f786a81555133771fa30eb5c05a9fe8,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879906322892627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5559f8a3b4af7c25c47c84fb201693afd4526ca598aeeb3c6cd94ecb8918d0,PodSandboxId:4d65e0930fc035b8b3e30442c58611d20089ce81e94502f3b6afe03672088bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1710879906315275000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ac550f2-fa4e-4e97-8904-8301e240d1b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.895219149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b593b0f-6db4-401f-8ffe-a0a33c9925e3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.895320075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b593b0f-6db4-401f-8ffe-a0a33c9925e3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.897091697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fca865d-0f5f-46d1-8266-7833e7e917f5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.897472790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879998897448394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fca865d-0f5f-46d1-8266-7833e7e917f5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.898529761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3750ab8-5d82-4437-9805-5d8b924ab769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.898611125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3750ab8-5d82-4437-9805-5d8b924ab769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:26:38 kubernetes-upgrade-853797 crio[2182]: time="2024-03-19 20:26:38.899067697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cf90ccb75e930fb12dad1cd2b8b50b87a4858badeba6c58803a1e6988ea8f3a,PodSandboxId:81ea8122b3a92f35b29ae3ef4ed291dacfb3f8adfb7cb95c52f9f31dafe82b0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995451378553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e70a3aaba46795709ed8705cc6f03367428b684f484fdcb6ac91aaa0a6110a,PodSandboxId:fd7c3309282c127205f3dcc9d3663eadbf1a29a21f074d3186adce91c9db8afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879995280918179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a7870c5a1103788e1238b9a95ebf4f2c485a360c16438a5f8c45bc30813463,PodSandboxId:52192eca4700c9eed2160eabdb3078c89bcc94aff0e34fd2352245e322186b00,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1710879994954760328,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b260731ab2d0d1961e83b3f9cf676564aec76e29630e4707746deeed8cfbc9,PodSandboxId:851e4e336ea5500f451b0bd1492d2a5a86cf3cc895aa8ada0560e5041ca610af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,C
reatedAt:1710879994887419182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0146a2648dc447c0d802ec25437c9e6487af5c5e314dd89bc293e59e581b3581,PodSandboxId:a6ec9fa3d4c4d692716c1c859a66c2048cd69b5db1afe75b38ee2c719abc3d0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879990148549712,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7795bc2013b7c5af364d4a375c36622b94cb1830f9e6cd8bd369787eb48f2be,PodSandboxId:249f605d03f0592b28f3fe5ae6099860b4b79394273246841a954914439b63fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710879990115288228,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d3cdd006fbe5eca343c35fea7775be65e2899678cec10d850366883de44f25,PodSandboxId:3dc4eb51b3d85d626b84cad3b476312365e832df6128349a2df4c657562a48b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710879990075413801,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e114e9c688d7fab23f509590e5890c6e0df6480f91ed3af26d3bfbfa96bfd76,PodSandboxId:42d373bfa3f964bfb846d54b4a742281adfda35ed92e62050a20e4421532913e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710879990065315336,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c26142f6734ead463e63adc785d5aae60edf97fa7c02937ccf7d0b6ae9c18c6,PodSandboxId:5c1dfe8f2d732dd490b66143e26f181500e3092968f1fd842ccbe1db206dedec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710879956948618603,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb387d14-5f53-417d-aaaa-9f6e9ca8174f,},Annotations:map[string]string{io.kubernetes.container.hash: 8e989f51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f357d984af3fa66c19b9ba3ca0f648c939da3da84d779b4912538ec1edeff9,PodSandboxId:ead0b84a844613063d0753511b435e1e1ad05ca42770a0daabbdc7edfe05f8ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_EXITED,CreatedAt:1710879925907211055,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xz6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac15bb1-c2af-432c-b9fc-d76ea75b2626,},Annotations:map[string]string{io.kubernetes.container.hash: b52675a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c4b516611ccbbbf6546debc610a5b8418bb4eaf8b1701814e4e913714c1b3f9,PodSandboxId:c8f8778928bce7bd15182a77c37410104256ab4c8a4418e2e3188dee3ed2d8a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926118309966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-jvxlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04e2f3a-f543-4abc-94d5-245cadb7b68e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5afedf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbc420d3183c73690dd22c46914aace22717607b24e04a0208be64b1611451e2,PodSandboxId:c84f00d809c6463e5de85d99f07fa588698eacaa81d980f0e80adba5ff25d8ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879926083707121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-228mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a401ff-d047-438c-955f-f771783c358a,},Annotations:map[string]string{io.kubernetes.container.hash: d5e06826,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d53846a1f2feeaf95aef12673a1d24dac318679ebb054f5983695e2af7d34f,PodSandboxId:31fb03f46f1d8724ec0c6ac9be837eeb5f1db3c9871f0356efdc7f2f172bd4d9,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1710879906364258143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561d6e857886e605c04cf616a012ddae,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e923f2a08662be7e9bc45c545ef2fa12e522f259a52ea6097e236e8dd22065,PodSandboxId:422218bc632b8317c2b8deb4d9ca93aefecb22264020b43e15f5680d119f1f57,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1710879906340385793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ca2c0cf93aae14b171d2ad94491a30,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3e07026a6111b756f926396a062a18cecd326439891f80fac198f6e67f56df,PodSandboxId:93856e747c788b5fc11432d5fc4459053f786a81555133771fa30eb5c05a9fe8,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879906322892627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cc6f89e0a8c402d6af0f881a1ba870,},Annotations:map[string]string{io.kubernetes.container.hash: 2c794fee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5559f8a3b4af7c25c47c84fb201693afd4526ca598aeeb3c6cd94ecb8918d0,PodSandboxId:4d65e0930fc035b8b3e30442c58611d20089ce81e94502f3b6afe03672088bb5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1710879906315275000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-853797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b068901aab9fb26c84065d1bc7eee5a8,},Annotations:map[string]string{io.kubernetes.container.hash: bbd970d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3750ab8-5d82-4437-9805-5d8b924ab769 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1cf90ccb75e93       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   81ea8122b3a92       coredns-7db6d8ff4d-jvxlw
	a4e70a3aaba46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   fd7c3309282c1       coredns-7db6d8ff4d-228mb
	93a7870c5a110       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago        Running             storage-provisioner       2                   52192eca4700c       storage-provisioner
	22b260731ab2d       3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8   4 seconds ago        Running             kube-proxy                1                   851e4e336ea55       kube-proxy-xz6f8
	0146a2648dc44       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago        Running             etcd                      1                   a6ec9fa3d4c4d       etcd-kubernetes-upgrade-853797
	d7795bc2013b7       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   8 seconds ago        Running             kube-apiserver            1                   249f605d03f05       kube-apiserver-kubernetes-upgrade-853797
	66d3cdd006fbe       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   8 seconds ago        Running             kube-scheduler            1                   3dc4eb51b3d85       kube-scheduler-kubernetes-upgrade-853797
	6e114e9c688d7       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   8 seconds ago        Running             kube-controller-manager   1                   42d373bfa3f96       kube-controller-manager-kubernetes-upgrade-853797
	8c26142f6734e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   42 seconds ago       Exited              storage-provisioner       1                   5c1dfe8f2d732       storage-provisioner
	2c4b516611ccb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   c8f8778928bce       coredns-7db6d8ff4d-jvxlw
	fbc420d3183c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   c84f00d809c64       coredns-7db6d8ff4d-228mb
	a2f357d984af3       3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8   About a minute ago   Exited              kube-proxy                0                   ead0b84a84461       kube-proxy-xz6f8
	38d53846a1f2f       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   About a minute ago   Exited              kube-controller-manager   0                   31fb03f46f1d8       kube-controller-manager-kubernetes-upgrade-853797
	93e923f2a0866       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   About a minute ago   Exited              kube-scheduler            0                   422218bc632b8       kube-scheduler-kubernetes-upgrade-853797
	8c3e07026a611       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   93856e747c788       etcd-kubernetes-upgrade-853797
	ec5559f8a3b4a       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   About a minute ago   Exited              kube-apiserver            0                   4d65e0930fc03       kube-apiserver-kubernetes-upgrade-853797
	
	
	==> coredns [1cf90ccb75e930fb12dad1cd2b8b50b87a4858badeba6c58803a1e6988ea8f3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [2c4b516611ccbbbf6546debc610a5b8418bb4eaf8b1701814e4e913714c1b3f9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[273112500]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Mar-2024 20:25:26.415) (total time: 30001ms):
	Trace[273112500]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:25:56.415)
	Trace[273112500]: [30.001177407s] [30.001177407s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1401336026]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Mar-2024 20:25:26.416) (total time: 30001ms):
	Trace[1401336026]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:25:56.417)
	Trace[1401336026]: [30.00124727s] [30.00124727s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[843517996]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Mar-2024 20:25:26.416) (total time: 30000ms):
	Trace[843517996]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:25:56.417)
	Trace[843517996]: [30.000977819s] [30.000977819s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4e70a3aaba46795709ed8705cc6f03367428b684f484fdcb6ac91aaa0a6110a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fbc420d3183c73690dd22c46914aace22717607b24e04a0208be64b1611451e2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1251549298]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Mar-2024 20:25:26.389) (total time: 30001ms):
	Trace[1251549298]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:25:56.390)
	Trace[1251549298]: [30.001824621s] [30.001824621s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[356068502]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Mar-2024 20:25:26.389) (total time: 30001ms):
	Trace[356068502]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:25:56.390)
	Trace[356068502]: [30.001777862s] [30.001777862s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1663068490]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Mar-2024 20:25:26.389) (total time: 30002ms):
	Trace[1663068490]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:25:56.390)
	Trace[1663068490]: [30.002137996s] [30.002137996s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-853797
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-853797
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:25:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-853797
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:26:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:26:34 +0000   Tue, 19 Mar 2024 20:25:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:26:34 +0000   Tue, 19 Mar 2024 20:25:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:26:34 +0000   Tue, 19 Mar 2024 20:25:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:26:34 +0000   Tue, 19 Mar 2024 20:25:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.116
	  Hostname:    kubernetes-upgrade-853797
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be4b095712e24eacad693b92bca79806
	  System UUID:                be4b0957-12e2-4eac-ad69-3b92bca79806
	  Boot ID:                    1449108d-0b10-4714-b970-b2df420f2886
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-beta.0
	  Kube-Proxy Version:         v1.30.0-beta.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-228mb                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 coredns-7db6d8ff4d-jvxlw                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-kubernetes-upgrade-853797                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	  kube-system                 kube-apiserver-kubernetes-upgrade-853797             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-853797    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-xz6f8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-kubernetes-upgrade-853797             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    94s (x8 over 94s)  kubelet          Node kubernetes-upgrade-853797 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x7 over 94s)  kubelet          Node kubernetes-upgrade-853797 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s (x8 over 94s)  kubelet          Node kubernetes-upgrade-853797 status is now: NodeHasSufficientMemory
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           75s                node-controller  Node kubernetes-upgrade-853797 event: Registered Node kubernetes-upgrade-853797 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-853797 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-853797 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-853797 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.855346] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.086364] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082514] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.189959] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.150078] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.313058] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[Mar19 20:25] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +0.067545] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.561290] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +9.096355] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	[  +0.085674] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.063199] kauditd_printk_skb: 21 callbacks suppressed
	[ +24.759714] hrtimer: interrupt took 4474694 ns
	[  +6.634203] kauditd_printk_skb: 68 callbacks suppressed
	[Mar19 20:26] systemd-fstab-generator[2100]: Ignoring "noauto" option for root device
	[  +0.159344] systemd-fstab-generator[2112]: Ignoring "noauto" option for root device
	[  +0.181434] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[  +0.183104] systemd-fstab-generator[2139]: Ignoring "noauto" option for root device
	[  +0.368229] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +5.953995] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.085509] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.244101] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +5.585174] kauditd_printk_skb: 75 callbacks suppressed
	[  +1.867994] systemd-fstab-generator[3205]: Ignoring "noauto" option for root device
	
	
	==> etcd [0146a2648dc447c0d802ec25437c9e6487af5c5e314dd89bc293e59e581b3581] <==
	{"level":"info","ts":"2024-03-19T20:26:31.217244Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"938c7bbb9c530c74","local-member-id":"70e810c2542c58a7","added-peer-id":"70e810c2542c58a7","added-peer-peer-urls":["https://192.168.50.116:2380"]}
	{"level":"info","ts":"2024-03-19T20:26:31.217505Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"938c7bbb9c530c74","local-member-id":"70e810c2542c58a7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:26:31.217575Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:26:31.218871Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-19T20:26:31.223229Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"70e810c2542c58a7","initial-advertise-peer-urls":["https://192.168.50.116:2380"],"listen-peer-urls":["https://192.168.50.116:2380"],"advertise-client-urls":["https://192.168.50.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-19T20:26:31.22331Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:26:31.22109Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.116:2380"}
	{"level":"info","ts":"2024-03-19T20:26:31.223411Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.116:2380"}
	{"level":"info","ts":"2024-03-19T20:26:32.220688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-19T20:26:32.220752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:26:32.220788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 received MsgPreVoteResp from 70e810c2542c58a7 at term 2"}
	{"level":"info","ts":"2024-03-19T20:26:32.220801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T20:26:32.220806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 received MsgVoteResp from 70e810c2542c58a7 at term 3"}
	{"level":"info","ts":"2024-03-19T20:26:32.220823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70e810c2542c58a7 became leader at term 3"}
	{"level":"info","ts":"2024-03-19T20:26:32.22083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 70e810c2542c58a7 elected leader 70e810c2542c58a7 at term 3"}
	{"level":"info","ts":"2024-03-19T20:26:32.228133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:26:32.228413Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:26:32.228133Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"70e810c2542c58a7","local-member-attributes":"{Name:kubernetes-upgrade-853797 ClientURLs:[https://192.168.50.116:2379]}","request-path":"/0/members/70e810c2542c58a7/attributes","cluster-id":"938c7bbb9c530c74","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:26:32.229228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:26:32.229304Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:26:32.230903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.116:2379"}
	{"level":"info","ts":"2024-03-19T20:26:32.232558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-03-19T20:26:37.818522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.958294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4091"}
	{"level":"info","ts":"2024-03-19T20:26:37.818688Z","caller":"traceutil/trace.go:171","msg":"trace[1679958558] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:509; }","duration":"376.159884ms","start":"2024-03-19T20:26:37.442512Z","end":"2024-03-19T20:26:37.818672Z","steps":["trace[1679958558] 'range keys from in-memory index tree'  (duration: 375.867027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:26:37.818743Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:26:37.442496Z","time spent":"376.228777ms","remote":"127.0.0.1:39040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":4114,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	
	
	==> etcd [8c3e07026a6111b756f926396a062a18cecd326439891f80fac198f6e67f56df] <==
	{"level":"info","ts":"2024-03-19T20:25:30.125381Z","caller":"traceutil/trace.go:171","msg":"trace[1020109513] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"551.860508ms","start":"2024-03-19T20:25:29.573506Z","end":"2024-03-19T20:25:30.125366Z","steps":["trace[1020109513] 'process raft request'  (duration: 321.71641ms)","trace[1020109513] 'compare'  (duration: 229.046735ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:25:30.125481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:25:29.573491Z","time spent":"551.953932ms","remote":"127.0.0.1:49170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4602,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/kubernetes-upgrade-853797\" mod_revision:317 > success:<request_put:<key:\"/registry/minions/kubernetes-upgrade-853797\" value_size:4551 >> failure:<request_range:<key:\"/registry/minions/kubernetes-upgrade-853797\" > >"}
	{"level":"info","ts":"2024-03-19T20:25:30.12542Z","caller":"traceutil/trace.go:171","msg":"trace[722631869] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"349.944318ms","start":"2024-03-19T20:25:29.775464Z","end":"2024-03-19T20:25:30.125408Z","steps":["trace[722631869] 'process raft request'  (duration: 349.891541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:25:30.125676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:25:29.775453Z","time spent":"350.182364ms","remote":"127.0.0.1:49256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1324,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-9wzwt\" mod_revision:346 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-9wzwt\" value_size:1265 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-9wzwt\" > >"}
	{"level":"info","ts":"2024-03-19T20:25:30.125893Z","caller":"traceutil/trace.go:171","msg":"trace[1793953633] linearizableReadLoop","detail":"{readStateIndex:391; appliedIndex:390; }","duration":"352.776417ms","start":"2024-03-19T20:25:29.773103Z","end":"2024-03-19T20:25:30.12588Z","steps":["trace[1793953633] 'read index received'  (duration: 122.128579ms)","trace[1793953633] 'applied index is now lower than readState.Index'  (duration: 230.646813ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T20:25:30.126482Z","caller":"traceutil/trace.go:171","msg":"trace[1380236131] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"351.279258ms","start":"2024-03-19T20:25:29.775193Z","end":"2024-03-19T20:25:30.126472Z","steps":["trace[1380236131] 'process raft request'  (duration: 350.098654ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:25:30.126957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:25:29.775175Z","time spent":"351.747054ms","remote":"127.0.0.1:49162","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":799,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:344 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:742 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2024-03-19T20:25:30.126755Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.645862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-228mb\" ","response":"range_response_count:1 size:4680"}
	{"level":"info","ts":"2024-03-19T20:25:30.127214Z","caller":"traceutil/trace.go:171","msg":"trace[240795702] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-228mb; range_end:; response_count:1; response_revision:384; }","duration":"354.121507ms","start":"2024-03-19T20:25:29.773082Z","end":"2024-03-19T20:25:30.127204Z","steps":["trace[240795702] 'agreement among raft nodes before linearized reading'  (duration: 353.640332ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:25:30.127266Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:25:29.773072Z","time spent":"354.184635ms","remote":"127.0.0.1:49182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4703,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-228mb\" "}
	{"level":"warn","ts":"2024-03-19T20:25:51.194037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.649043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6388231106727747776 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.116\" mod_revision:393 > success:<request_put:<key:\"/registry/masterleases/192.168.50.116\" value_size:67 lease:6388231106727747774 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.116\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-19T20:25:51.194395Z","caller":"traceutil/trace.go:171","msg":"trace[1892574763] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"198.727999ms","start":"2024-03-19T20:25:50.995636Z","end":"2024-03-19T20:25:51.194364Z","steps":["trace[1892574763] 'process raft request'  (duration: 64.511412ms)","trace[1892574763] 'compare'  (duration: 133.558915ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:25:56.017627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.978426ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6388231106727747790 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f\" mod_revision:396 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f\" value_size:708 lease:6388231106727747411 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-19T20:25:56.017959Z","caller":"traceutil/trace.go:171","msg":"trace[1490294482] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"356.41712ms","start":"2024-03-19T20:25:55.661512Z","end":"2024-03-19T20:25:56.017929Z","steps":["trace[1490294482] 'process raft request'  (duration: 220.011864ms)","trace[1490294482] 'compare'  (duration: 135.889527ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:25:56.018206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:25:55.661501Z","time spent":"356.619491ms","remote":"127.0.0.1:49052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":796,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f\" mod_revision:396 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f\" value_size:708 lease:6388231106727747411 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f\" > >"}
	{"level":"info","ts":"2024-03-19T20:26:13.092143Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-19T20:26:13.092273Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-853797","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.116:2380"],"advertise-client-urls":["https://192.168.50.116:2379"]}
	{"level":"warn","ts":"2024-03-19T20:26:13.092398Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.116:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:26:13.092434Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.116:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:26:13.102949Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-19T20:26:13.107949Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-19T20:26:13.174938Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"70e810c2542c58a7","current-leader-member-id":"70e810c2542c58a7"}
	{"level":"info","ts":"2024-03-19T20:26:13.177692Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.116:2380"}
	{"level":"info","ts":"2024-03-19T20:26:13.17789Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.116:2380"}
	{"level":"info","ts":"2024-03-19T20:26:13.177937Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-853797","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.116:2380"],"advertise-client-urls":["https://192.168.50.116:2379"]}
	
	
	==> kernel <==
	 20:26:39 up 2 min,  0 users,  load average: 1.05, 0.49, 0.19
	Linux kubernetes-upgrade-853797 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d7795bc2013b7c5af364d4a375c36622b94cb1830f9e6cd8bd369787eb48f2be] <==
	I0319 20:26:33.746406       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0319 20:26:33.840448       1 shared_informer.go:320] Caches are synced for configmaps
	I0319 20:26:33.841021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0319 20:26:33.846349       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0319 20:26:33.856067       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0319 20:26:33.860614       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0319 20:26:33.860738       1 aggregator.go:165] initial CRD sync complete...
	I0319 20:26:33.860772       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 20:26:33.860783       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 20:26:33.860789       1 cache.go:39] Caches are synced for autoregister controller
	I0319 20:26:33.907412       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0319 20:26:33.908647       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0319 20:26:33.908693       1 policy_source.go:224] refreshing policies
	I0319 20:26:33.940676       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 20:26:33.940774       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0319 20:26:33.941454       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 20:26:33.946416       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0319 20:26:33.959089       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 20:26:34.758905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 20:26:36.148503       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0319 20:26:36.171635       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0319 20:26:36.213678       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0319 20:26:36.316795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 20:26:36.324845       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 20:26:37.338890       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ec5559f8a3b4af7c25c47c84fb201693afd4526ca598aeeb3c6cd94ecb8918d0] <==
	Trace[1698765041]: [588.99954ms] [588.99954ms] END
	I0319 20:25:27.930028       1 trace.go:236] Trace[262661803]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7136df03-681c-4338-8e2a-c71f5c1f29e9,client:192.168.50.116,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/634fc1b,verb:POST (19-Mar-2024 20:25:27.393) (total time: 536ms):
	Trace[262661803]: ["Create etcd3" audit-id:7136df03-681c-4338-8e2a-c71f5c1f29e9,key:/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f,type:*core.Event,resource:events 535ms (20:25:27.394)
	Trace[262661803]:  ---"Txn call succeeded" 535ms (20:25:27.929)]
	Trace[262661803]: [536.673376ms] [536.673376ms] END
	I0319 20:25:27.932752       1 trace.go:236] Trace[1479126939]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:de343a95-7e26-4c2a-a896-63b1b40c7ef8,client:192.168.50.116,api-group:,api-version:v1,name:kube-proxy-xz6f8,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-xz6f8,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/634fc1b,verb:GET (19-Mar-2024 20:25:27.398) (total time: 534ms):
	Trace[1479126939]: ---"About to write a response" 532ms (20:25:27.930)
	Trace[1479126939]: [534.131988ms] [534.131988ms] END
	I0319 20:25:29.047296       1 trace.go:236] Trace[1428823719]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:91fcad91-b2a9-44ba-9a80-dbf01e0c06a6,client:192.168.50.116,api-group:,api-version:v1,name:coredns-7db6d8ff4d-228mb.17be443a91f1335f,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events/coredns-7db6d8ff4d-228mb.17be443a91f1335f,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/634fc1b,verb:PATCH (19-Mar-2024 20:25:27.934) (total time: 1113ms):
	Trace[1428823719]: ["GuaranteedUpdate etcd3" audit-id:91fcad91-b2a9-44ba-9a80-dbf01e0c06a6,key:/events/kube-system/coredns-7db6d8ff4d-228mb.17be443a91f1335f,type:*core.Event,resource:events 1112ms (20:25:27.934)
	Trace[1428823719]:  ---"Txn call completed" 1110ms (20:25:29.047)]
	Trace[1428823719]: ---"Object stored in database" 1111ms (20:25:29.047)
	Trace[1428823719]: [1.113066612s] [1.113066612s] END
	I0319 20:25:29.338488       1 trace.go:236] Trace[257297562]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9e9ef513-46bf-4507-8517-2d418daae435,client:192.168.50.116,api-group:,api-version:v1,name:kube-proxy-xz6f8,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-xz6f8/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/634fc1b,verb:PATCH (19-Mar-2024 20:25:27.934) (total time: 1403ms):
	Trace[257297562]: ["GuaranteedUpdate etcd3" audit-id:9e9ef513-46bf-4507-8517-2d418daae435,key:/pods/kube-system/kube-proxy-xz6f8,type:*core.Pod,resource:pods 1403ms (20:25:27.934)
	Trace[257297562]:  ---"Txn call completed" 1393ms (20:25:29.332)]
	Trace[257297562]: ---"Object stored in database" 1394ms (20:25:29.333)
	Trace[257297562]: [1.403883928s] [1.403883928s] END
	I0319 20:25:30.130725       1 trace.go:236] Trace[994756118]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:454231dc-1055-4f04-b647-0be3862a669a,client:192.168.50.116,api-group:,api-version:v1,name:kubernetes-upgrade-853797,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/kubernetes-upgrade-853797/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/634fc1b,verb:PATCH (19-Mar-2024 20:25:29.571) (total time: 559ms):
	Trace[994756118]: ["GuaranteedUpdate etcd3" audit-id:454231dc-1055-4f04-b647-0be3862a669a,key:/minions/kubernetes-upgrade-853797,type:*core.Node,resource:nodes 558ms (20:25:29.571)
	Trace[994756118]:  ---"Txn call completed" 554ms (20:25:30.128)]
	Trace[994756118]: ---"Object stored in database" 555ms (20:25:30.128)
	Trace[994756118]: [559.058979ms] [559.058979ms] END
	I0319 20:26:13.084038       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0319 20:26:13.119487       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [38d53846a1f2feeaf95aef12673a1d24dac318679ebb054f5983695e2af7d34f] <==
	I0319 20:25:24.455940       1 shared_informer.go:320] Caches are synced for persistent volume
	I0319 20:25:24.469339       1 shared_informer.go:320] Caches are synced for attach detach
	I0319 20:25:24.487260       1 shared_informer.go:320] Caches are synced for PVC protection
	I0319 20:25:24.487281       1 shared_informer.go:320] Caches are synced for expand
	I0319 20:25:24.503068       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0319 20:25:24.522703       1 shared_informer.go:320] Caches are synced for crt configmap
	I0319 20:25:24.523118       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0319 20:25:24.525347       1 shared_informer.go:320] Caches are synced for endpoint
	I0319 20:25:24.546169       1 shared_informer.go:320] Caches are synced for resource quota
	I0319 20:25:24.575618       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0319 20:25:24.579778       1 shared_informer.go:320] Caches are synced for resource quota
	I0319 20:25:24.948071       1 shared_informer.go:320] Caches are synced for garbage collector
	I0319 20:25:24.948174       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0319 20:25:24.982032       1 shared_informer.go:320] Caches are synced for garbage collector
	I0319 20:25:25.281889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="591.142551ms"
	I0319 20:25:25.312573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.604072ms"
	I0319 20:25:25.312694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.76µs"
	I0319 20:25:25.312761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.366µs"
	I0319 20:25:25.321041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.994µs"
	I0319 20:25:29.771912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.045µs"
	I0319 20:25:30.150306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.001µs"
	I0319 20:26:05.591847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.620964ms"
	I0319 20:26:05.592086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="174.285µs"
	I0319 20:26:05.626722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.315382ms"
	I0319 20:26:05.628232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.002µs"
	
	
	==> kube-controller-manager [6e114e9c688d7fab23f509590e5890c6e0df6480f91ed3af26d3bfbfa96bfd76] <==
	I0319 20:26:31.176467       1 serving.go:380] Generated self-signed cert in-memory
	I0319 20:26:31.868853       1 controllermanager.go:189] "Starting" version="v1.30.0-beta.0"
	I0319 20:26:31.868939       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:26:31.870624       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0319 20:26:31.870745       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0319 20:26:31.871337       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0319 20:26:31.871441       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 20:26:35.810085       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0319 20:26:35.809961       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0319 20:26:35.811088       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0319 20:26:35.817584       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0319 20:26:35.817761       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0319 20:26:35.818236       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0319 20:26:35.818944       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0319 20:26:35.822495       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0319 20:26:35.822673       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0319 20:26:35.823897       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0319 20:26:35.825475       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0319 20:26:35.825638       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0319 20:26:35.826339       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0319 20:26:35.828597       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0319 20:26:35.828790       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0319 20:26:35.829501       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0319 20:26:35.911047       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [22b260731ab2d0d1961e83b3f9cf676564aec76e29630e4707746deeed8cfbc9] <==
	I0319 20:26:35.351369       1 server_linux.go:69] "Using iptables proxy"
	I0319 20:26:35.385455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.116"]
	I0319 20:26:35.541327       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0319 20:26:35.541404       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:26:35.541424       1 server_linux.go:165] "Using iptables Proxier"
	I0319 20:26:35.554270       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:26:35.554647       1 server.go:872] "Version info" version="v1.30.0-beta.0"
	I0319 20:26:35.554698       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:26:35.584496       1 config.go:101] "Starting endpoint slice config controller"
	I0319 20:26:35.615881       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0319 20:26:35.615935       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0319 20:26:35.586264       1 config.go:192] "Starting service config controller"
	I0319 20:26:35.616053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0319 20:26:35.616062       1 shared_informer.go:320] Caches are synced for service config
	I0319 20:26:35.585953       1 config.go:319] "Starting node config controller"
	I0319 20:26:35.616371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0319 20:26:35.616379       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a2f357d984af3fa66c19b9ba3ca0f648c939da3da84d779b4912538ec1edeff9] <==
	I0319 20:25:26.572729       1 server_linux.go:69] "Using iptables proxy"
	I0319 20:25:26.670402       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.116"]
	I0319 20:25:26.719198       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0319 20:25:26.719296       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:25:26.719326       1 server_linux.go:165] "Using iptables Proxier"
	I0319 20:25:26.723183       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:25:26.723916       1 server.go:872] "Version info" version="v1.30.0-beta.0"
	I0319 20:25:26.724096       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:25:26.726893       1 config.go:192] "Starting service config controller"
	I0319 20:25:26.727294       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0319 20:25:26.727524       1 config.go:101] "Starting endpoint slice config controller"
	I0319 20:25:26.727565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0319 20:25:26.728731       1 config.go:319] "Starting node config controller"
	I0319 20:25:26.728781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0319 20:25:26.828115       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0319 20:25:26.828202       1 shared_informer.go:320] Caches are synced for service config
	I0319 20:25:26.829587       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [66d3cdd006fbe5eca343c35fea7775be65e2899678cec10d850366883de44f25] <==
	I0319 20:26:33.854307       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:26:33.854323       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0319 20:26:33.867260       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:26:33.867343       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:26:33.870414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0319 20:26:33.870476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0319 20:26:33.870743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0319 20:26:33.870808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0319 20:26:33.876679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 20:26:33.876739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 20:26:33.876838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 20:26:33.876863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 20:26:33.879072       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 20:26:33.879146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 20:26:33.880378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 20:26:33.880480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:26:33.880593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 20:26:33.880647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 20:26:33.880705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 20:26:33.880717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 20:26:33.880825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:26:33.880872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:26:33.884537       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0319 20:26:33.884602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I0319 20:26:34.854643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [93e923f2a08662be7e9bc45c545ef2fa12e522f259a52ea6097e236e8dd22065] <==
	E0319 20:25:10.036884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 20:25:10.044857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:25:10.045064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:25:10.081500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 20:25:10.081572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 20:25:10.187222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:25:10.187277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:25:10.200897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 20:25:10.201166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 20:25:10.276188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 20:25:10.276288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:25:10.314773       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:25:10.315058       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:25:10.328227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 20:25:10.328339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 20:25:10.406098       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 20:25:10.407416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 20:25:10.447671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 20:25:10.447755       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 20:25:10.526149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:25:10.526238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 20:25:10.531221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 20:25:10.531294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0319 20:25:12.554227       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0319 20:26:13.088470       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.268419    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.416414    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.509276    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.509301    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.509306    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.518824    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.518846    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.518851    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.527803    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.527829    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:35 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:35.527836    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:36 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:36.535891    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:36 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:36.535912    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:36 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:36.535916    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:37 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:37.541590    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:37 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:37.541660    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:37 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:37.541668    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:37 kubernetes-upgrade-853797 kubelet[2393]: I0319 20:26:37.541696    2393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:39.540211    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:39.540278    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:39.540285    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: I0319 20:26:39.540303    2393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:39.564832    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:39.564885    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:26:39 kubernetes-upgrade-853797 kubelet[2393]: E0319 20:26:39.564894    2393 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	
	
	==> storage-provisioner [8c26142f6734ead463e63adc785d5aae60edf97fa7c02937ccf7d0b6ae9c18c6] <==
	I0319 20:25:57.057546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:25:57.065920       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:25:57.069249       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:25:57.084541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:25:57.085478       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acf47b6e-7214-4b8a-89b0-684daad3a4e0", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-853797_f5b91383-28df-4905-b9c9-0886119f9c97 became leader
	I0319 20:25:57.086030       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-853797_f5b91383-28df-4905-b9c9-0886119f9c97!
	I0319 20:25:57.186457       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-853797_f5b91383-28df-4905-b9c9-0886119f9c97!
	
	
	==> storage-provisioner [93a7870c5a1103788e1238b9a95ebf4f2c485a360c16438a5f8c45bc30813463] <==
	I0319 20:26:35.223114       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:26:35.262061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:26:35.265792       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:26:38.339704   57230 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18453-10028/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
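The "bufio.Scanner: token too long" error in the stderr block above is Go's standard failure mode when a single line exceeds bufio.Scanner's default 64 KiB token limit, which is why the oversized lastStart.txt could not be read back. Below is a minimal sketch of raising that limit with Scanner.Buffer; readLongLines and the buffer sizes are illustrative assumptions, not minikube's actual logs.go implementation.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a file line by line, raising bufio.Scanner's
	// default 64 KiB token limit so a very long line (such as a huge
	// lastStart.txt entry) does not abort the scan with "token too long".
	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start with a 1 MiB buffer and allow tokens up to 16 MiB.
		sc.Buffer(make([]byte, 1024*1024), 16*1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		// sc.Err() still reports bufio.ErrTooLong if the 16 MiB cap is exceeded.
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read failed:", err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}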
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-853797 -n kubernetes-upgrade-853797
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-853797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-853797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-853797
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-853797: (1.132931451s)
--- FAIL: TestKubernetesUpgrade (451.13s)
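The status command in the post-mortem above renders its output through a Go text/template (--format={{.APIServer}}). A minimal sketch of that mechanism follows; the Status struct is a hypothetical stand-in for minikube's real status type and only models the field referenced above.

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct that `minikube status --format=...`
	// renders; only the APIServer field used in the post-mortem is modeled.
	type Status struct {
		APIServer string
	}

	func main() {
		// {{.APIServer}} is the same template expression passed via --format above.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{APIServer: "Stopped"}); err != nil {
			os.Exit(1)
		}
	}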

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (59.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-746219 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-746219 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.133506725s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-746219] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-746219" primary control-plane node in "pause-746219" cluster
	* Updating the running kvm2 "pause-746219" VM ...
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-746219" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:23:45.621588   54890 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:23:45.621743   54890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:23:45.621754   54890 out.go:304] Setting ErrFile to fd 2...
	I0319 20:23:45.621760   54890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:23:45.622032   54890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:23:45.622700   54890 out.go:298] Setting JSON to false
	I0319 20:23:45.623709   54890 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7524,"bootTime":1710872302,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:23:45.623774   54890 start.go:139] virtualization: kvm guest
	I0319 20:23:45.626212   54890 out.go:177] * [pause-746219] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:23:45.627683   54890 notify.go:220] Checking for updates...
	I0319 20:23:45.627687   54890 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:23:45.629212   54890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:23:45.630630   54890 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:23:45.631990   54890 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:23:45.633238   54890 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:23:45.634523   54890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:23:45.636282   54890 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:23:45.636918   54890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:23:45.637003   54890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:23:45.652848   54890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41923
	I0319 20:23:45.653291   54890 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:23:45.653811   54890 main.go:141] libmachine: Using API Version  1
	I0319 20:23:45.653844   54890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:23:45.654266   54890 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:23:45.654442   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:23:45.654727   54890 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:23:45.655124   54890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:23:45.655192   54890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:23:45.670311   54890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0319 20:23:45.670724   54890 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:23:45.671303   54890 main.go:141] libmachine: Using API Version  1
	I0319 20:23:45.671334   54890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:23:45.671604   54890 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:23:45.671779   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:23:45.709460   54890 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:23:45.710799   54890 start.go:297] selected driver: kvm2
	I0319 20:23:45.710814   54890 start.go:901] validating driver "kvm2" against &{Name:pause-746219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.29.3 ClusterName:pause-746219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:23:45.711007   54890 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:23:45.711464   54890 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:23:45.711553   54890 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:23:45.728971   54890 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:23:45.729677   54890 cni.go:84] Creating CNI manager for ""
	I0319 20:23:45.729700   54890 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:23:45.729779   54890 start.go:340] cluster config:
	{Name:pause-746219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-746219 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:23:45.729954   54890 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:23:45.732111   54890 out.go:177] * Starting "pause-746219" primary control-plane node in "pause-746219" cluster
	I0319 20:23:45.733595   54890 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:23:45.733627   54890 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:23:45.733634   54890 cache.go:56] Caching tarball of preloaded images
	I0319 20:23:45.733702   54890 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:23:45.733713   54890 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:23:45.733826   54890 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/config.json ...
	I0319 20:23:45.734014   54890 start.go:360] acquireMachinesLock for pause-746219: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:23:59.825818   54890 start.go:364] duration metric: took 14.091745692s to acquireMachinesLock for "pause-746219"
	I0319 20:23:59.825867   54890 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:23:59.825876   54890 fix.go:54] fixHost starting: 
	I0319 20:23:59.826257   54890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:23:59.826303   54890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:23:59.843161   54890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0319 20:23:59.843581   54890 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:23:59.844065   54890 main.go:141] libmachine: Using API Version  1
	I0319 20:23:59.844085   54890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:23:59.844511   54890 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:23:59.844724   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:23:59.844871   54890 main.go:141] libmachine: (pause-746219) Calling .GetState
	I0319 20:23:59.846389   54890 fix.go:112] recreateIfNeeded on pause-746219: state=Running err=<nil>
	W0319 20:23:59.846411   54890 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:23:59.848836   54890 out.go:177] * Updating the running kvm2 "pause-746219" VM ...
	I0319 20:23:59.851102   54890 machine.go:94] provisionDockerMachine start ...
	I0319 20:23:59.851128   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:23:59.851349   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:23:59.854030   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:23:59.854437   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:23:59.854480   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:23:59.854578   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:23:59.855059   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:23:59.855201   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:23:59.855317   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:23:59.855488   54890 main.go:141] libmachine: Using SSH client type: native
	I0319 20:23:59.855676   54890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0319 20:23:59.855689   54890 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:23:59.973752   54890 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-746219
	
	I0319 20:23:59.973780   54890 main.go:141] libmachine: (pause-746219) Calling .GetMachineName
	I0319 20:23:59.974023   54890 buildroot.go:166] provisioning hostname "pause-746219"
	I0319 20:23:59.974054   54890 main.go:141] libmachine: (pause-746219) Calling .GetMachineName
	I0319 20:23:59.974225   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:23:59.977230   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:23:59.977668   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:23:59.977698   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:23:59.977830   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:23:59.978024   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:23:59.978166   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:23:59.978281   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:23:59.978492   54890 main.go:141] libmachine: Using SSH client type: native
	I0319 20:23:59.978674   54890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0319 20:23:59.978688   54890 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-746219 && echo "pause-746219" | sudo tee /etc/hostname
	I0319 20:24:00.111156   54890 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-746219
	
	I0319 20:24:00.111186   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:00.113956   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.114277   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:00.114315   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.114453   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:00.114647   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:00.114826   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:00.114942   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:00.115115   54890 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:00.115289   54890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0319 20:24:00.115305   54890 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-746219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-746219/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-746219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:24:00.233487   54890 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:24:00.233512   54890 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:24:00.233541   54890 buildroot.go:174] setting up certificates
	I0319 20:24:00.233553   54890 provision.go:84] configureAuth start
	I0319 20:24:00.233566   54890 main.go:141] libmachine: (pause-746219) Calling .GetMachineName
	I0319 20:24:00.233823   54890 main.go:141] libmachine: (pause-746219) Calling .GetIP
	I0319 20:24:00.236472   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.236860   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:00.236885   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.237041   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:00.239109   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.239367   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:00.239404   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.239494   54890 provision.go:143] copyHostCerts
	I0319 20:24:00.239560   54890 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:24:00.239573   54890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:24:00.239639   54890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:24:00.239760   54890 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:24:00.239778   54890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:24:00.239810   54890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:24:00.239943   54890 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:24:00.239954   54890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:24:00.239983   54890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:24:00.240065   54890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.pause-746219 san=[127.0.0.1 192.168.39.29 localhost minikube pause-746219]
	I0319 20:24:00.324240   54890 provision.go:177] copyRemoteCerts
	I0319 20:24:00.324335   54890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:24:00.324369   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:00.327069   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.327409   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:00.327446   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.327664   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:00.327836   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:00.327997   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:00.328171   54890 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/pause-746219/id_rsa Username:docker}
	I0319 20:24:00.429320   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0319 20:24:00.457756   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:24:00.488272   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:24:00.521733   54890 provision.go:87] duration metric: took 288.167546ms to configureAuth
	I0319 20:24:00.521784   54890 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:24:00.521994   54890 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:00.522064   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:00.524956   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.525353   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:00.525402   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:00.525718   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:00.525942   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:00.526117   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:00.526291   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:00.526481   54890 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:00.526697   54890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0319 20:24:00.526718   54890 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:24:06.109024   54890 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:24:06.109054   54890 machine.go:97] duration metric: took 6.257933714s to provisionDockerMachine
	I0319 20:24:06.109068   54890 start.go:293] postStartSetup for "pause-746219" (driver="kvm2")
	I0319 20:24:06.109082   54890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:24:06.109101   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:24:06.109402   54890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:24:06.109428   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:06.112330   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.112684   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:06.112730   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.112906   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:06.113102   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:06.113299   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:06.113498   54890 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/pause-746219/id_rsa Username:docker}
	I0319 20:24:06.207238   54890 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:24:06.212612   54890 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:24:06.212637   54890 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:24:06.212704   54890 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:24:06.212796   54890 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:24:06.212905   54890 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:24:06.223475   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:24:06.249226   54890 start.go:296] duration metric: took 140.148095ms for postStartSetup
	I0319 20:24:06.249259   54890 fix.go:56] duration metric: took 6.423384118s for fixHost
	I0319 20:24:06.249279   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:06.251591   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.251980   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:06.252020   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.252154   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:06.252347   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:06.252532   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:06.252645   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:06.252810   54890 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:06.253008   54890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0319 20:24:06.253023   54890 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:24:06.365304   54890 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879846.364970853
	
	I0319 20:24:06.365325   54890 fix.go:216] guest clock: 1710879846.364970853
	I0319 20:24:06.365332   54890 fix.go:229] Guest: 2024-03-19 20:24:06.364970853 +0000 UTC Remote: 2024-03-19 20:24:06.249262775 +0000 UTC m=+20.680674763 (delta=115.708078ms)
	I0319 20:24:06.365353   54890 fix.go:200] guest clock delta is within tolerance: 115.708078ms
	I0319 20:24:06.365358   54890 start.go:83] releasing machines lock for "pause-746219", held for 6.539511996s
	I0319 20:24:06.365393   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:24:06.365724   54890 main.go:141] libmachine: (pause-746219) Calling .GetIP
	I0319 20:24:06.368773   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.369181   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:06.369208   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.369399   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:24:06.369975   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:24:06.370146   54890 main.go:141] libmachine: (pause-746219) Calling .DriverName
	I0319 20:24:06.370244   54890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:24:06.370289   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:06.370380   54890 ssh_runner.go:195] Run: cat /version.json
	I0319 20:24:06.370400   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHHostname
	I0319 20:24:06.373125   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.373487   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:06.373512   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.373533   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.373671   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:06.373846   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:06.374001   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:06.374069   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:06.374032   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:06.374192   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHPort
	I0319 20:24:06.374276   54890 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/pause-746219/id_rsa Username:docker}
	I0319 20:24:06.374303   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHKeyPath
	I0319 20:24:06.374421   54890 main.go:141] libmachine: (pause-746219) Calling .GetSSHUsername
	I0319 20:24:06.374578   54890 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/pause-746219/id_rsa Username:docker}
	I0319 20:24:06.480324   54890 ssh_runner.go:195] Run: systemctl --version
	I0319 20:24:06.488374   54890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:24:06.669446   54890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:24:06.678049   54890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:24:06.678132   54890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:24:06.691777   54890 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0319 20:24:06.691801   54890 start.go:494] detecting cgroup driver to use...
	I0319 20:24:06.691891   54890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:24:06.720624   54890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:24:06.739661   54890 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:24:06.739716   54890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:24:06.758619   54890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:24:06.779344   54890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:24:06.976821   54890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:24:07.173311   54890 docker.go:233] disabling docker service ...
	I0319 20:24:07.173376   54890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:24:07.230284   54890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:24:07.248197   54890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:24:07.432683   54890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:24:07.638755   54890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:24:07.678615   54890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:24:07.703236   54890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:24:07.703294   54890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:07.715695   54890 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:24:07.715744   54890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:07.816821   54890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:07.994379   54890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:08.048728   54890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:24:08.088322   54890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:08.135852   54890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:08.224787   54890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:08.249257   54890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:24:08.279853   54890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:24:08.368309   54890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:08.714945   54890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:24:09.855054   54890 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.140076478s)
	I0319 20:24:09.855086   54890 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:24:09.855136   54890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:24:09.861092   54890 start.go:562] Will wait 60s for crictl version
	I0319 20:24:09.861150   54890 ssh_runner.go:195] Run: which crictl
	I0319 20:24:09.867857   54890 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:24:09.924404   54890 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:24:09.924497   54890 ssh_runner.go:195] Run: crio --version
	I0319 20:24:09.964901   54890 ssh_runner.go:195] Run: crio --version
	I0319 20:24:10.004153   54890 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:24:10.005350   54890 main.go:141] libmachine: (pause-746219) Calling .GetIP
	I0319 20:24:10.008269   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:10.008644   54890 main.go:141] libmachine: (pause-746219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:34:06", ip: ""} in network mk-pause-746219: {Iface:virbr1 ExpiryTime:2024-03-19 21:22:58 +0000 UTC Type:0 Mac:52:54:00:ff:34:06 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:pause-746219 Clientid:01:52:54:00:ff:34:06}
	I0319 20:24:10.008668   54890 main.go:141] libmachine: (pause-746219) DBG | domain pause-746219 has defined IP address 192.168.39.29 and MAC address 52:54:00:ff:34:06 in network mk-pause-746219
	I0319 20:24:10.008949   54890 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:24:10.015016   54890 kubeadm.go:877] updating cluster {Name:pause-746219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-746219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:24:10.015185   54890 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:24:10.015252   54890 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:24:10.062198   54890 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:24:10.062226   54890 crio.go:433] Images already preloaded, skipping extraction
	I0319 20:24:10.062279   54890 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:24:10.105401   54890 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:24:10.105427   54890 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:24:10.105437   54890 kubeadm.go:928] updating node { 192.168.39.29 8443 v1.29.3 crio true true} ...
	I0319 20:24:10.105579   54890 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-746219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-746219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:24:10.105673   54890 ssh_runner.go:195] Run: crio config
	I0319 20:24:10.168590   54890 cni.go:84] Creating CNI manager for ""
	I0319 20:24:10.168619   54890 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:24:10.168634   54890 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:24:10.168660   54890 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-746219 NodeName:pause-746219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:24:10.168855   54890 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-746219"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:24:10.168933   54890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:24:10.203166   54890 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:24:10.203248   54890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:24:10.236746   54890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0319 20:24:10.299631   54890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:24:10.355335   54890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
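Note: the kubeadm configuration printed earlier in this log is what gets staged on the node as /var/tmp/minikube/kubeadm.yaml.new in the scp line just above. A minimal spot-check from the host (an assumed manual step, not part of the test; profile name and path taken from this log) would be:
	minikube -p pause-746219 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new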
	I0319 20:24:10.399660   54890 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:24:10.415216   54890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:10.892601   54890 ssh_runner.go:195] Run: sudo systemctl start kubelet
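After the daemon-reload and kubelet start above, one hedged way to confirm the unit picked up the 10-kubeadm.conf drop-in written a few lines earlier is to query systemd inside the VM (assumed verification only; flags mirror the systemctl invocations used elsewhere in this report):
	minikube -p pause-746219 ssh -- sudo systemctl status kubelet --no-pager --full
	minikube -p pause-746219 ssh -- sudo systemctl cat kubelet --no-pager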
	I0319 20:24:10.929126   54890 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219 for IP: 192.168.39.29
	I0319 20:24:10.929209   54890 certs.go:194] generating shared ca certs ...
	I0319 20:24:10.929235   54890 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:10.929449   54890 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:24:10.929510   54890 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:24:10.929532   54890 certs.go:256] generating profile certs ...
	I0319 20:24:10.929673   54890 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/client.key
	I0319 20:24:10.929765   54890 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/apiserver.key.917c6ab7
	I0319 20:24:10.929834   54890 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/proxy-client.key
	I0319 20:24:10.929994   54890 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:24:10.930034   54890 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:24:10.930047   54890 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:24:10.930087   54890 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:24:10.930145   54890 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:24:10.930208   54890 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:24:10.930292   54890 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:24:10.931169   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:24:10.991580   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:24:11.059696   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:24:11.118128   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:24:11.163456   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0319 20:24:11.211815   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:24:11.253834   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:24:11.293920   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/pause-746219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:24:11.357436   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:24:11.428716   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:24:11.468673   54890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:24:11.510114   54890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:24:11.537509   54890 ssh_runner.go:195] Run: openssl version
	I0319 20:24:11.546232   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:24:11.562047   54890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:24:11.569305   54890 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:24:11.569368   54890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:24:11.577719   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:24:11.593573   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:24:11.609971   54890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:24:11.616392   54890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:24:11.616456   54890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:24:11.623690   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:24:11.635344   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:24:11.649298   54890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:24:11.654739   54890 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:24:11.654796   54890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:24:11.661566   54890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:24:11.672951   54890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:24:11.680325   54890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:24:11.688799   54890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:24:11.697552   54890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:24:11.704825   54890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:24:11.711789   54890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:24:11.720044   54890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
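The run of openssl calls above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now. The same check can be reproduced by hand against any of the listed certificates, e.g. (cert path taken from the log; sketch of a manual check, not part of the test):
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "apiserver-kubelet-client.crt valid for >=24h"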
	I0319 20:24:11.729083   54890 kubeadm.go:391] StartCluster: {Name:pause-746219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-746219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:24:11.729201   54890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:24:11.729257   54890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:24:11.780888   54890 cri.go:89] found id: "cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af"
	I0319 20:24:11.780922   54890 cri.go:89] found id: "80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9"
	I0319 20:24:11.780927   54890 cri.go:89] found id: "c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e"
	I0319 20:24:11.780931   54890 cri.go:89] found id: "4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523"
	I0319 20:24:11.780934   54890 cri.go:89] found id: "dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e"
	I0319 20:24:11.780938   54890 cri.go:89] found id: "8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267"
	I0319 20:24:11.780942   54890 cri.go:89] found id: ""
	I0319 20:24:11.781004   54890 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
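The last stderr lines above come from cri.go listing the paused kube-system containers before StartCluster resumes them. The same listing can be reproduced inside the VM with the exact crictl invocation shown in the log (a sketch, assuming the pause-746219 VM is still running):
	minikube -p pause-746219 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system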
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-746219 -n pause-746219
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-746219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-746219 logs -n 25: (4.386098791s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-378078 sudo docker                         | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo find                           | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo crio                           | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-378078                                     | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	| start   | -p force-systemd-env-587385                          | force-systemd-env-587385  | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-746219                                      | pause-746219              | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-910871 ssh cat                    | force-systemd-flag-910871 | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-910871                         | force-systemd-flag-910871 | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	| start   | -p cert-options-346618                               | cert-options-346618       | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-853797                         | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p kubernetes-upgrade-853797                         | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-587385                          | force-systemd-env-587385  | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p cert-expiration-428153                            | cert-expiration-428153    | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:24:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:24:23.725286   55554 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:24:23.725407   55554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:24:23.725412   55554 out.go:304] Setting ErrFile to fd 2...
	I0319 20:24:23.725415   55554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:24:23.725619   55554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:24:23.726137   55554 out.go:298] Setting JSON to false
	I0319 20:24:23.727023   55554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7562,"bootTime":1710872302,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:24:23.727071   55554 start.go:139] virtualization: kvm guest
	I0319 20:24:23.729761   55554 out.go:177] * [cert-expiration-428153] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:24:23.731532   55554 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:24:23.731496   55554 notify.go:220] Checking for updates...
	I0319 20:24:23.733243   55554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:24:23.734952   55554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:24:23.736453   55554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:24:23.737895   55554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:24:23.739279   55554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:24:23.741400   55554 config.go:182] Loaded profile config "cert-options-346618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:23.741477   55554 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:24:23.741585   55554 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:23.741660   55554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:24:23.778255   55554 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:24:23.779935   55554 start.go:297] selected driver: kvm2
	I0319 20:24:23.779942   55554 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:24:23.779952   55554 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:24:23.780733   55554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:24:23.780801   55554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:24:23.795716   55554 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:24:23.795754   55554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 20:24:23.795974   55554 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 20:24:23.796023   55554 cni.go:84] Creating CNI manager for ""
	I0319 20:24:23.796031   55554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:24:23.796044   55554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 20:24:23.796086   55554 start.go:340] cluster config:
	{Name:cert-expiration-428153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-428153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:24:23.796161   55554 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:24:23.798527   55554 out.go:177] * Starting "cert-expiration-428153" primary control-plane node in "cert-expiration-428153" cluster
	I0319 20:24:20.224580   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:20.225071   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find current IP address of domain cert-options-346618 in network mk-cert-options-346618
	I0319 20:24:20.225087   55088 main.go:141] libmachine: (cert-options-346618) DBG | I0319 20:24:20.225019   55186 retry.go:31] will retry after 2.772249273s: waiting for machine to come up
	I0319 20:24:22.998132   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:23.013869   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find current IP address of domain cert-options-346618 in network mk-cert-options-346618
	I0319 20:24:23.013888   55088 main.go:141] libmachine: (cert-options-346618) DBG | I0319 20:24:23.013801   55186 retry.go:31] will retry after 3.666703177s: waiting for machine to come up
	I0319 20:24:22.516500   54890 pod_ready.go:102] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:24.521252   54890 pod_ready.go:102] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:25.516083   54890 pod_ready.go:92] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:25.516105   54890 pod_ready.go:81] duration metric: took 5.006998774s for pod "etcd-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:25.516114   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
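pod_ready.go polls the pod's Ready condition through the API server. A rough kubectl equivalent of the check logged above (context and pod names taken from the log; the jsonpath query is an assumed way to inspect the same condition by hand, not what the test runs):
	kubectl --context pause-746219 -n kube-system get pod etcd-pause-746219 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'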
	I0319 20:24:23.800189   55554 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:24:23.800228   55554 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:24:23.800233   55554 cache.go:56] Caching tarball of preloaded images
	I0319 20:24:23.800371   55554 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:24:23.800385   55554 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:24:23.800481   55554 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/config.json ...
	I0319 20:24:23.800503   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/config.json: {Name:mk87e481ad903f92be7d6a0d22d14bb92d36dbbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:23.800665   55554 start.go:360] acquireMachinesLock for cert-expiration-428153: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:24:26.682167   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:26.682722   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find current IP address of domain cert-options-346618 in network mk-cert-options-346618
	I0319 20:24:26.682739   55088 main.go:141] libmachine: (cert-options-346618) DBG | I0319 20:24:26.682672   55186 retry.go:31] will retry after 4.107731031s: waiting for machine to come up
	I0319 20:24:27.523894   54890 pod_ready.go:102] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:30.023918   54890 pod_ready.go:102] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:32.373812   55361 start.go:364] duration metric: took 19.03028141s to acquireMachinesLock for "kubernetes-upgrade-853797"
	I0319 20:24:32.373861   55361 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:24:32.373869   55361 fix.go:54] fixHost starting: 
	I0319 20:24:32.374258   55361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:24:32.374308   55361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:24:32.392058   55361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0319 20:24:32.392515   55361 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:24:32.392999   55361 main.go:141] libmachine: Using API Version  1
	I0319 20:24:32.393018   55361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:24:32.393362   55361 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:24:32.393576   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:24:32.393722   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetState
	I0319 20:24:32.395582   55361 fix.go:112] recreateIfNeeded on kubernetes-upgrade-853797: state=Stopped err=<nil>
	I0319 20:24:32.395608   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	W0319 20:24:32.395761   55361 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:24:32.398045   55361 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-853797" ...
	I0319 20:24:32.399697   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .Start
	I0319 20:24:32.399869   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring networks are active...
	I0319 20:24:32.400624   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring network default is active
	I0319 20:24:32.401021   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring network mk-kubernetes-upgrade-853797 is active
	I0319 20:24:32.401447   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Getting domain xml...
	I0319 20:24:32.402186   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Creating domain...
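The restart sequence above goes through libvirt: the driver re-activates the default and mk-kubernetes-upgrade-853797 networks, fetches the domain XML, and creates the domain again. A hedged way to observe the same state from the host is the standard virsh CLI (domain and network names taken from the log; an assumed manual check, not part of the test):
	virsh list --all | grep kubernetes-upgrade-853797
	virsh net-list --all | grep mk-kubernetes-upgrade-853797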
	I0319 20:24:30.794840   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.795342   55088 main.go:141] libmachine: (cert-options-346618) Found IP for machine: 192.168.61.123
	I0319 20:24:30.795360   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has current primary IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.795380   55088 main.go:141] libmachine: (cert-options-346618) Reserving static IP address...
	I0319 20:24:30.795827   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find host DHCP lease matching {name: "cert-options-346618", mac: "52:54:00:0e:ec:75", ip: "192.168.61.123"} in network mk-cert-options-346618
	I0319 20:24:30.873231   55088 main.go:141] libmachine: (cert-options-346618) DBG | Getting to WaitForSSH function...
	I0319 20:24:30.873253   55088 main.go:141] libmachine: (cert-options-346618) Reserved static IP address: 192.168.61.123
	I0319 20:24:30.873266   55088 main.go:141] libmachine: (cert-options-346618) Waiting for SSH to be available...
	I0319 20:24:30.876571   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.877021   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:30.877043   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.877165   55088 main.go:141] libmachine: (cert-options-346618) DBG | Using SSH client type: external
	I0319 20:24:30.877188   55088 main.go:141] libmachine: (cert-options-346618) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa (-rw-------)
	I0319 20:24:30.877228   55088 main.go:141] libmachine: (cert-options-346618) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:24:30.877242   55088 main.go:141] libmachine: (cert-options-346618) DBG | About to run SSH command:
	I0319 20:24:30.877254   55088 main.go:141] libmachine: (cert-options-346618) DBG | exit 0
	I0319 20:24:31.009165   55088 main.go:141] libmachine: (cert-options-346618) DBG | SSH cmd err, output: <nil>: 
	I0319 20:24:31.009486   55088 main.go:141] libmachine: (cert-options-346618) KVM machine creation complete!
	I0319 20:24:31.009840   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetConfigRaw
	I0319 20:24:31.010460   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:31.010662   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:31.010842   55088 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:24:31.010852   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetState
	I0319 20:24:31.012040   55088 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:24:31.012048   55088 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:24:31.012054   55088 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:24:31.012062   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.014427   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.014882   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.014918   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.015028   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.015206   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.015375   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.015493   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.015625   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.015865   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.015874   55088 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:24:31.123929   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:24:31.123943   55088 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:24:31.123951   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.126736   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.127071   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.127083   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.127242   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.127434   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.127591   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.127738   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.127863   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.128048   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.128053   55088 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:24:31.237297   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:24:31.237363   55088 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:24:31.237368   55088 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:24:31.237375   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetMachineName
	I0319 20:24:31.237622   55088 buildroot.go:166] provisioning hostname "cert-options-346618"
	I0319 20:24:31.237640   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetMachineName
	I0319 20:24:31.237832   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.240306   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.240687   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.240709   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.240865   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.241043   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.241204   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.241344   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.241535   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.241685   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.241692   55088 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-346618 && echo "cert-options-346618" | sudo tee /etc/hostname
	I0319 20:24:31.368791   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-346618
	
	I0319 20:24:31.368807   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.371327   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.371637   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.371647   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.371907   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.372075   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.372202   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.372349   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.372505   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.372678   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.372694   55088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-346618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-346618/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-346618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:24:31.496816   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
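The shell fragment above rewrites the 127.0.1.1 entry in /etc/hosts so the VM resolves its own hostname. A one-line check inside the VM (hostname from the log; assumed verification step only):
	grep cert-options-346618 /etc/hosts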
	I0319 20:24:31.496834   55088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:24:31.496849   55088 buildroot.go:174] setting up certificates
	I0319 20:24:31.496857   55088 provision.go:84] configureAuth start
	I0319 20:24:31.496864   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetMachineName
	I0319 20:24:31.497119   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:31.500356   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.500742   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.500764   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.500930   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.503447   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.503760   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.503780   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.503936   55088 provision.go:143] copyHostCerts
	I0319 20:24:31.503995   55088 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:24:31.504000   55088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:24:31.504051   55088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:24:31.504127   55088 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:24:31.504130   55088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:24:31.504152   55088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:24:31.504193   55088 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:24:31.504196   55088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:24:31.504221   55088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:24:31.504293   55088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.cert-options-346618 san=[127.0.0.1 192.168.61.123 cert-options-346618 localhost minikube]
	I0319 20:24:31.653767   55088 provision.go:177] copyRemoteCerts
	I0319 20:24:31.653808   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:24:31.653837   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.656283   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.656611   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.656634   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.656815   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.657008   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.657137   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.657279   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:31.743193   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:24:31.772267   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:24:31.799828   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:24:31.827928   55088 provision.go:87] duration metric: took 331.061574ms to configureAuth
	I0319 20:24:31.827943   55088 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:24:31.828091   55088 config.go:182] Loaded profile config "cert-options-346618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:31.828148   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.831103   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.831496   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.831521   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.831707   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.831890   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.832059   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.832200   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.832390   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.832550   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.832563   55088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:24:32.116498   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:24:32.116516   55088 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:24:32.116525   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetURL
	I0319 20:24:32.117984   55088 main.go:141] libmachine: (cert-options-346618) DBG | Using libvirt version 6000000
	I0319 20:24:32.120296   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.120683   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.120708   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.120877   55088 main.go:141] libmachine: Docker is up and running!
	I0319 20:24:32.120887   55088 main.go:141] libmachine: Reticulating splines...
	I0319 20:24:32.120893   55088 client.go:171] duration metric: took 25.731119186s to LocalClient.Create
	I0319 20:24:32.120917   55088 start.go:167] duration metric: took 25.731197527s to libmachine.API.Create "cert-options-346618"
	I0319 20:24:32.120922   55088 start.go:293] postStartSetup for "cert-options-346618" (driver="kvm2")
	I0319 20:24:32.120930   55088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:24:32.120942   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.121163   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:24:32.121181   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.123533   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.123877   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.123900   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.124080   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.124229   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.124380   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.124522   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:32.211256   55088 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:24:32.216107   55088 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:24:32.216126   55088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:24:32.216181   55088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:24:32.216253   55088 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:24:32.216412   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:24:32.226216   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:24:32.253707   55088 start.go:296] duration metric: took 132.774704ms for postStartSetup
	I0319 20:24:32.253762   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetConfigRaw
	I0319 20:24:32.254328   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:32.257045   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.257388   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.257403   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.257661   55088 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/config.json ...
	I0319 20:24:32.257822   55088 start.go:128] duration metric: took 25.892205293s to createHost
	I0319 20:24:32.257842   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.260124   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.260503   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.260525   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.260618   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.260810   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.260984   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.261109   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.261257   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:32.261425   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:32.261430   55088 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:24:32.373639   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879872.313453987
	
	I0319 20:24:32.373653   55088 fix.go:216] guest clock: 1710879872.313453987
	I0319 20:24:32.373663   55088 fix.go:229] Guest: 2024-03-19 20:24:32.313453987 +0000 UTC Remote: 2024-03-19 20:24:32.25783231 +0000 UTC m=+37.571314777 (delta=55.621677ms)
	I0319 20:24:32.373726   55088 fix.go:200] guest clock delta is within tolerance: 55.621677ms
	I0319 20:24:32.373736   55088 start.go:83] releasing machines lock for "cert-options-346618", held for 26.008280751s
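
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine once the delta stays inside a small tolerance. Below is a minimal, standalone Go sketch of that comparison; the parsing helper and the 1-second tolerance are illustrative assumptions, not minikube's actual code or constant.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as
// "1710879872.313453987" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Normalize the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710879872.313453987") // sample value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Hypothetical tolerance, for illustration only; the real threshold may differ.
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
	}
}
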
	I0319 20:24:32.373772   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.374097   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:32.376955   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.377373   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.377418   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.377544   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.378220   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.378397   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.378471   55088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:24:32.378514   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.378607   55088 ssh_runner.go:195] Run: cat /version.json
	I0319 20:24:32.378622   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.381296   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.381631   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.381658   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.381675   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.381837   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.382006   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.382175   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.382181   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.382200   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.382319   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:32.382351   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.382486   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.382640   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.382810   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:32.470499   55088 ssh_runner.go:195] Run: systemctl --version
	I0319 20:24:32.496506   55088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:24:32.664452   55088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:24:32.672787   55088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:24:32.672849   55088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:24:32.691540   55088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:24:32.691552   55088 start.go:494] detecting cgroup driver to use...
	I0319 20:24:32.691603   55088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:24:32.710523   55088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:24:32.728372   55088 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:24:32.728425   55088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:24:32.745168   55088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:24:32.763556   55088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:24:32.895708   55088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:24:33.054418   55088 docker.go:233] disabling docker service ...
	I0319 20:24:33.054462   55088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:24:33.074599   55088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:24:33.092858   55088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:24:33.261128   55088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:24:33.425464   55088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:24:33.443581   55088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:24:33.467192   55088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:24:33.467273   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.480401   55088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:24:33.480471   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.492472   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.504380   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.516746   55088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:24:33.530157   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.542865   55088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.563632   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.576934   55088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:24:33.587765   55088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:24:33.587808   55088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:24:33.603546   55088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:24:33.615406   55088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:33.769572   55088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:24:33.925026   55088 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:24:33.925097   55088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:24:33.931181   55088 start.go:562] Will wait 60s for crictl version
	I0319 20:24:33.931249   55088 ssh_runner.go:195] Run: which crictl
	I0319 20:24:33.935984   55088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:24:33.984596   55088 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:24:33.984677   55088 ssh_runner.go:195] Run: crio --version
	I0319 20:24:34.024124   55088 ssh_runner.go:195] Run: crio --version
	I0319 20:24:34.062129   55088 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
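
The sequence of `sed -i` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` so that CRI-O uses `registry.k8s.io/pause:3.9` as the pause image and `cgroupfs` as the cgroup manager. The following standalone Go sketch applies the same two substitutions with `regexp`; it assumes the drop-in file uses the default key = value layout and is an illustration, not a reproduction of minikube's crio.go. As the log shows, CRI-O still needs `sudo systemctl restart crio` afterwards for the change to take effect.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Path as it appears in the log above; point this at a local copy when experimenting.
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(confPath, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("updated", confPath)
}
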
	I0319 20:24:32.530081   54890 pod_ready.go:102] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:34.025904   54890 pod_ready.go:92] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.025933   54890 pod_ready.go:81] duration metric: took 8.509811287s for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.025947   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.535319   54890 pod_ready.go:92] pod "kube-controller-manager-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.535352   54890 pod_ready.go:81] duration metric: took 509.395321ms for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.535373   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.541859   54890 pod_ready.go:92] pod "kube-proxy-dtc7z" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.541893   54890 pod_ready.go:81] duration metric: took 6.5098ms for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.541906   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.548341   54890 pod_ready.go:92] pod "kube-scheduler-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.548369   54890 pod_ready.go:81] duration metric: took 6.452173ms for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.548380   54890 pod_ready.go:38] duration metric: took 14.056332754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:24:34.548402   54890 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:24:34.566095   54890 ops.go:34] apiserver oom_adj: -16
	I0319 20:24:34.566115   54890 kubeadm.go:591] duration metric: took 22.688476668s to restartPrimaryControlPlane
	I0319 20:24:34.566127   54890 kubeadm.go:393] duration metric: took 22.837053133s to StartCluster
	I0319 20:24:34.566145   54890 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:34.566216   54890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:24:34.566971   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:34.567197   54890 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:24:34.569177   54890 out.go:177] * Verifying Kubernetes components...
	I0319 20:24:34.567273   54890 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:24:34.567441   54890 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:34.570726   54890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:34.572318   54890 out.go:177] * Enabled addons: 
	I0319 20:24:34.063439   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:34.066508   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:34.066874   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:34.066893   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:34.067112   55088 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:24:34.072986   55088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:24:34.091336   55088 kubeadm.go:877] updating cluster {Name:cert-options-346618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:cert-options-346618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:24:34.091431   55088 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:24:34.091498   55088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:24:34.125633   55088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:24:34.125681   55088 ssh_runner.go:195] Run: which lz4
	I0319 20:24:34.130354   55088 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:24:34.135446   55088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:24:34.135469   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:24:34.573798   54890 addons.go:505] duration metric: took 6.527447ms for enable addons: enabled=[]
	I0319 20:24:34.798284   54890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:24:34.821845   54890 node_ready.go:35] waiting up to 6m0s for node "pause-746219" to be "Ready" ...
	I0319 20:24:34.826733   54890 node_ready.go:49] node "pause-746219" has status "Ready":"True"
	I0319 20:24:34.826758   54890 node_ready.go:38] duration metric: took 4.877005ms for node "pause-746219" to be "Ready" ...
	I0319 20:24:34.826770   54890 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:24:34.833425   54890 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-df6fq" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.841881   54890 pod_ready.go:92] pod "coredns-76f75df574-df6fq" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.841909   54890 pod_ready.go:81] duration metric: took 8.455556ms for pod "coredns-76f75df574-df6fq" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.841918   54890 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.222339   54890 pod_ready.go:92] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:35.222369   54890 pod_ready.go:81] duration metric: took 380.439665ms for pod "etcd-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.222380   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.622646   54890 pod_ready.go:92] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:35.623836   54890 pod_ready.go:81] duration metric: took 401.441624ms for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.623857   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.022312   54890 pod_ready.go:92] pod "kube-controller-manager-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:36.022338   54890 pod_ready.go:81] duration metric: took 398.472219ms for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.022353   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.421495   54890 pod_ready.go:92] pod "kube-proxy-dtc7z" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:36.421520   54890 pod_ready.go:81] duration metric: took 399.159369ms for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.421529   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.820855   54890 pod_ready.go:92] pod "kube-scheduler-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:36.820887   54890 pod_ready.go:81] duration metric: took 399.350816ms for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.820899   54890 pod_ready.go:38] duration metric: took 1.994117795s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:24:36.820917   54890 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:24:36.820979   54890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:24:36.837674   54890 api_server.go:72] duration metric: took 2.270445023s to wait for apiserver process to appear ...
	I0319 20:24:36.837702   54890 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:24:36.837724   54890 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0319 20:24:36.842306   54890 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0319 20:24:36.843919   54890 api_server.go:141] control plane version: v1.29.3
	I0319 20:24:36.843938   54890 api_server.go:131] duration metric: took 6.228808ms to wait for apiserver health ...
	I0319 20:24:36.843949   54890 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:24:37.023755   54890 system_pods.go:59] 6 kube-system pods found
	I0319 20:24:37.023787   54890 system_pods.go:61] "coredns-76f75df574-df6fq" [b061b790-e6e7-4ed9-9b30-edf71179954b] Running
	I0319 20:24:37.023793   54890 system_pods.go:61] "etcd-pause-746219" [f3194da2-cfb9-4527-89b9-5661373ac7a4] Running
	I0319 20:24:37.023797   54890 system_pods.go:61] "kube-apiserver-pause-746219" [8ed5bea1-554a-47ca-8ea7-94b2e1ff0e8f] Running
	I0319 20:24:37.023802   54890 system_pods.go:61] "kube-controller-manager-pause-746219" [21851eec-9279-4e5e-904f-23bf8c796279] Running
	I0319 20:24:37.023806   54890 system_pods.go:61] "kube-proxy-dtc7z" [ac3bbf7f-db46-4da0-aeee-b105b9202f35] Running
	I0319 20:24:37.023810   54890 system_pods.go:61] "kube-scheduler-pause-746219" [be26e770-cf88-47f9-94e2-015c782a89dc] Running
	I0319 20:24:37.023819   54890 system_pods.go:74] duration metric: took 179.863163ms to wait for pod list to return data ...
	I0319 20:24:37.023829   54890 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:24:37.221008   54890 default_sa.go:45] found service account: "default"
	I0319 20:24:37.221040   54890 default_sa.go:55] duration metric: took 197.204319ms for default service account to be created ...
	I0319 20:24:37.221051   54890 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:24:37.426742   54890 system_pods.go:86] 6 kube-system pods found
	I0319 20:24:37.426779   54890 system_pods.go:89] "coredns-76f75df574-df6fq" [b061b790-e6e7-4ed9-9b30-edf71179954b] Running
	I0319 20:24:37.426788   54890 system_pods.go:89] "etcd-pause-746219" [f3194da2-cfb9-4527-89b9-5661373ac7a4] Running
	I0319 20:24:37.426794   54890 system_pods.go:89] "kube-apiserver-pause-746219" [8ed5bea1-554a-47ca-8ea7-94b2e1ff0e8f] Running
	I0319 20:24:37.426801   54890 system_pods.go:89] "kube-controller-manager-pause-746219" [21851eec-9279-4e5e-904f-23bf8c796279] Running
	I0319 20:24:37.426807   54890 system_pods.go:89] "kube-proxy-dtc7z" [ac3bbf7f-db46-4da0-aeee-b105b9202f35] Running
	I0319 20:24:37.426813   54890 system_pods.go:89] "kube-scheduler-pause-746219" [be26e770-cf88-47f9-94e2-015c782a89dc] Running
	I0319 20:24:37.426823   54890 system_pods.go:126] duration metric: took 205.763828ms to wait for k8s-apps to be running ...
	I0319 20:24:37.426836   54890 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:24:37.426891   54890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:24:37.445840   54890 system_svc.go:56] duration metric: took 18.99447ms WaitForService to wait for kubelet
	I0319 20:24:37.445869   54890 kubeadm.go:576] duration metric: took 2.878643311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:24:37.445890   54890 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:24:37.620906   54890 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:24:37.620936   54890 node_conditions.go:123] node cpu capacity is 2
	I0319 20:24:37.620951   54890 node_conditions.go:105] duration metric: took 175.054177ms to run NodePressure ...
	I0319 20:24:37.620967   54890 start.go:240] waiting for startup goroutines ...
	I0319 20:24:37.620978   54890 start.go:245] waiting for cluster config update ...
	I0319 20:24:37.620992   54890 start.go:254] writing updated cluster config ...
	I0319 20:24:37.621375   54890 ssh_runner.go:195] Run: rm -f paused
	I0319 20:24:37.680126   54890 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:24:37.682483   54890 out.go:177] * Done! kubectl is now configured to use "pause-746219" cluster and "default" namespace by default
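
The api_server.go lines earlier in this run poll `https://192.168.39.29:8443/healthz` until it returns 200 with body `ok` before the control plane is considered healthy. A minimal Go sketch of that probe follows; skipping TLS verification is an assumption made only to keep the example short, whereas a real client would load the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust for your own cluster.
	const healthz = "https://192.168.39.29:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped here for brevity; a proper client
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
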
	
	
	==> CRI-O <==
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.479863357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879878479825910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cabd46e5-aac1-4120-9fa1-5c2b5359f74a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.480884279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=309f57cb-f57a-49d5-98d7-d9786e642e38 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.480957316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=309f57cb-f57a-49d5-98d7-d9786e642e38 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.481289804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=309f57cb-f57a-49d5-98d7-d9786e642e38 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.540229091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3558ab7d-6d8f-4901-bf1d-164cc9956e3f name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.540335037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3558ab7d-6d8f-4901-bf1d-164cc9956e3f name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.543036943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1e65351-bd16-4ee8-ae35-d35a2507cbab name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.543556900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879878543524578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1e65351-bd16-4ee8-ae35-d35a2507cbab name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.544688073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=358bc527-dd95-473b-8388-eddc53a6e73a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.544867842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=358bc527-dd95-473b-8388-eddc53a6e73a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.545241344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=358bc527-dd95-473b-8388-eddc53a6e73a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.596545226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74409930-5597-4d38-b607-ddfb26fad9f3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.596827662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74409930-5597-4d38-b607-ddfb26fad9f3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.599386374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa2732e6-cd3f-477f-b02e-9b3b5cebdb94 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.599933784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879878599907559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa2732e6-cd3f-477f-b02e-9b3b5cebdb94 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.600602906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9583424e-c96e-4aa5-bffb-f59ae45a0211 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.600654580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9583424e-c96e-4aa5-bffb-f59ae45a0211 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.601174857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9583424e-c96e-4aa5-bffb-f59ae45a0211 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.660685865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eab91b06-ac77-47d8-9105-acb679fc933e name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.660875499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eab91b06-ac77-47d8-9105-acb679fc933e name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.662179465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5996768e-641b-4858-8f97-025d4e226d52 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.662648579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879878662622273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5996768e-641b-4858-8f97-025d4e226d52 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.663229748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25ee3c2f-e666-4642-ad35-87117c639a05 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.663390499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25ee3c2f-e666-4642-ad35-87117c639a05 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:38 pause-746219 crio[2674]: time="2024-03-19 20:24:38.663679237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25ee3c2f-e666-4642-ad35-87117c639a05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66dcce3b597e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   1                   8d7bfa5a36ec2       coredns-76f75df574-df6fq
	e1d904b3263b7       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   19 seconds ago      Running             kube-proxy                2                   2db53363d84da       kube-proxy-dtc7z
	449b026f0e180       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   24 seconds ago      Running             kube-controller-manager   2                   8ae72d8342079       kube-controller-manager-pause-746219
	465cb7aa444ab       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   24 seconds ago      Running             kube-apiserver            2                   e9f70a383de70       kube-apiserver-pause-746219
	8259358ff5ffe       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   24 seconds ago      Running             kube-scheduler            2                   65e8e968acc8a       kube-scheduler-pause-746219
	42006b40a7bd7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   3cff552da57d2       etcd-pause-746219
	cb2b6f5b064b1       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   30 seconds ago      Exited              kube-proxy                1                   d4f301a947c37       kube-proxy-dtc7z
	80c4dff69ab18       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   30 seconds ago      Exited              kube-apiserver            1                   b7a9539febd1c       kube-apiserver-pause-746219
	c38e906f5bb79       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      1                   d8b899667fd74       etcd-pause-746219
	4a6072285f8d3       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   30 seconds ago      Exited              kube-controller-manager   1                   e89618feef782       kube-controller-manager-pause-746219
	dc3fa53814725       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   30 seconds ago      Exited              kube-scheduler            1                   4921aa607f46a       kube-scheduler-pause-746219
	8b1ee006a1cea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   55 seconds ago      Exited              coredns                   0                   6fe3fa0f01841       coredns-76f75df574-df6fq
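
For reference, the CONTAINER/STATE/ATTEMPT listing above is produced from the same CRI `ListContainers` RPC that the CRI-O debug log shows being served. Below is a minimal Go sketch (not part of the test suite) of issuing that call directly; it assumes it runs on the node, that the CRI-O socket is at the path shown in the node's cri-socket annotation, and that the `k8s.io/cri-api` client package is available.

// list_containers.go: hedged sketch of calling the CRI ListContainers RPC against CRI-O.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path taken from the node annotation above (an assumption that it is reachable here).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, which is why the CRI-O log above says
	// "No filters were applied, returning full container list".
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-25s %-17s attempt=%d pod=%s\n",
			c.Metadata.Name, c.State, c.Metadata.Attempt,
			c.Labels["io.kubernetes.pod.name"])
	}
}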
	
	
	==> coredns [66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46312 - 23797 "HINFO IN 3137644523788626685.2320026330751616152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013625958s
	
	
	==> coredns [8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49618 - 22365 "HINFO IN 4862001822609258994.6606096006039772862. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010424627s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-746219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-746219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=pause-746219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_23_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-746219
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:24:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    pause-746219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 665fdf75264a4242bde6dba7d945c435
	  System UUID:                665fdf75-264a-4242-bde6-dba7d945c435
	  Boot ID:                    765e1546-1b69-4f7e-ba92-833c29d2c0aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-df6fq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-746219                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         73s
	  kube-system                 kube-apiserver-pause-746219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-746219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-dtc7z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-746219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     73s                kubelet          Node pause-746219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node pause-746219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node pause-746219 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeReady                72s                kubelet          Node pause-746219 status is now: NodeReady
	  Normal  RegisteredNode           59s                node-controller  Node pause-746219 event: Registered Node pause-746219 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-746219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-746219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-746219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-746219 event: Registered Node pause-746219 in Controller
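
The Conditions and Events above are what the kubelet reports through the API server for this node. A small client-go sketch that reads the same node conditions is shown below; the kubeconfig path and the ability to reach the pause-746219 cluster from wherever this runs are assumptions, not something the report guarantees.

// node_conditions.go: hedged sketch of reading the node conditions shown above via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); assumes the
	// minikube profile's context is the current one.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "pause-746219", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("get node: %v", err)
	}
	// Print the same Type/Status/Reason columns that appear in the describe output.
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	}
}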
	
	
	==> dmesg <==
	[  +0.057624] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079152] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182769] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.149632] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.358581] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +5.037746] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059410] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.853174] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.546297] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.751758] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.072675] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.990183] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +0.130560] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 20:24] systemd-fstab-generator[2083]: Ignoring "noauto" option for root device
	[  +0.112670] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.090475] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.257144] systemd-fstab-generator[2136]: Ignoring "noauto" option for root device
	[  +0.218889] systemd-fstab-generator[2168]: Ignoring "noauto" option for root device
	[  +0.977782] systemd-fstab-generator[2464]: Ignoring "noauto" option for root device
	[  +2.177444] systemd-fstab-generator[2918]: Ignoring "noauto" option for root device
	[  +3.079391] systemd-fstab-generator[3146]: Ignoring "noauto" option for root device
	[  +0.080006] kauditd_printk_skb: 230 callbacks suppressed
	[  +5.608140] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.266167] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.936623] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	
	
	==> etcd [42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785] <==
	{"level":"info","ts":"2024-03-19T20:24:15.092669Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:24:15.09487Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-19T20:24:15.095013Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:24:15.095074Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:24:15.095089Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:24:15.095287Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-19T20:24:15.095324Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-19T20:24:15.097641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b switched to configuration voters=(10945199911802443307)"}
	{"level":"info","ts":"2024-03-19T20:24:15.097833Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","added-peer-id":"97e52954629f162b","added-peer-peer-urls":["https://192.168.39.29:2380"]}
	{"level":"info","ts":"2024-03-19T20:24:15.098019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:24:15.09806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:24:16.34102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-19T20:24:16.341141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:24:16.341216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgPreVoteResp from 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-03-19T20:24:16.341264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.341297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgVoteResp from 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.341332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.341366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.344226Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:pause-746219 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:24:16.344419Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:24:16.347404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-19T20:24:16.347516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:24:16.353878Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:24:16.353929Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:24:16.372258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.29:2379"}
	
	
	==> etcd [c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e] <==
	
	
	==> kernel <==
	 20:24:41 up 1 min,  0 users,  load average: 0.70, 0.32, 0.12
	Linux pause-746219 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7] <==
	I0319 20:24:18.325796       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0319 20:24:18.325836       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0319 20:24:18.325846       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0319 20:24:18.454592       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 20:24:18.480923       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 20:24:18.482357       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 20:24:18.482866       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 20:24:18.482920       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0319 20:24:18.482961       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 20:24:18.482877       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 20:24:18.482887       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0319 20:24:18.490868       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 20:24:18.512328       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 20:24:18.512463       1 aggregator.go:165] initial CRD sync complete...
	I0319 20:24:18.512506       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 20:24:18.512538       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 20:24:18.512568       1 cache.go:39] Caches are synced for autoregister controller
	I0319 20:24:19.285684       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 20:24:20.321715       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 20:24:20.335805       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 20:24:20.381898       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 20:24:20.448966       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 20:24:20.465710       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 20:24:30.698585       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 20:24:30.767454       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9] <==
	
	
	==> kube-controller-manager [449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a] <==
	I0319 20:24:30.746882       1 shared_informer.go:318] Caches are synced for node
	I0319 20:24:30.746969       1 range_allocator.go:174] "Sending events to api server"
	I0319 20:24:30.747034       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0319 20:24:30.747066       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0319 20:24:30.747075       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0319 20:24:30.751923       1 shared_informer.go:318] Caches are synced for taint
	I0319 20:24:30.752046       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0319 20:24:30.752219       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-746219"
	I0319 20:24:30.752283       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0319 20:24:30.752617       1 event.go:376] "Event occurred" object="pause-746219" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-746219 event: Registered Node pause-746219 in Controller"
	I0319 20:24:30.757701       1 shared_informer.go:318] Caches are synced for ephemeral
	I0319 20:24:30.759016       1 shared_informer.go:318] Caches are synced for endpoint
	I0319 20:24:30.763787       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0319 20:24:30.774987       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0319 20:24:30.801866       1 shared_informer.go:318] Caches are synced for attach detach
	I0319 20:24:30.806853       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0319 20:24:30.855234       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 20:24:30.866104       1 shared_informer.go:318] Caches are synced for deployment
	I0319 20:24:30.869160       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0319 20:24:30.869409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="95.481µs"
	I0319 20:24:30.881496       1 shared_informer.go:318] Caches are synced for disruption
	I0319 20:24:30.906236       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 20:24:31.282036       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 20:24:31.313618       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 20:24:31.313708       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523] <==
	
	
	==> kube-proxy [cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af] <==
	
	
	==> kube-proxy [e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08] <==
	I0319 20:24:19.628717       1 server_others.go:72] "Using iptables proxy"
	I0319 20:24:19.661957       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.29"]
	I0319 20:24:19.740225       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:24:19.740276       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:24:19.740301       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:24:19.743398       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:24:19.743622       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:24:19.743666       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:24:19.750033       1 config.go:188] "Starting service config controller"
	I0319 20:24:19.750305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:24:19.750393       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:24:19.750423       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:24:19.752715       1 config.go:315] "Starting node config controller"
	I0319 20:24:19.752970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:24:19.851376       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 20:24:19.851462       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:24:19.853664       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0] <==
	I0319 20:24:18.430231       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 20:24:18.430284       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:24:18.438536       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 20:24:18.438700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 20:24:18.438832       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:24:18.439850       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0319 20:24:18.449848       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:24:18.449926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:24:18.450012       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:24:18.450044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 20:24:18.450112       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:24:18.450141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:24:18.450223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 20:24:18.451879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 20:24:18.452164       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 20:24:18.452206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 20:24:18.452315       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:24:18.452360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:24:18.452410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0319 20:24:18.452448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 20:24:18.452491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 20:24:18.452527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 20:24:18.452217       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 20:24:18.454894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0319 20:24:18.539049       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e] <==
	I0319 20:24:09.177132       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.415150    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71cb0fb2f271718f665e384897f527e-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-746219\" (UID: \"a71cb0fb2f271718f665e384897f527e\") " pod="kube-system/kube-controller-manager-pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.415169    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/015828164ebb3a003f49e90fef000fe0-etcd-data\") pod \"etcd-pause-746219\" (UID: \"015828164ebb3a003f49e90fef000fe0\") " pod="kube-system/etcd-pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.415185    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be698cd9aca41bfb6299d91a22b548a1-ca-certs\") pod \"kube-apiserver-pause-746219\" (UID: \"be698cd9aca41bfb6299d91a22b548a1\") " pod="kube-system/kube-apiserver-pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: E0319 20:24:14.612573    3153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-746219?timeout=10s\": dial tcp 192.168.39.29:8443: connect: connection refused" interval="800ms"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.635308    3153 scope.go:117] "RemoveContainer" containerID="c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.636614    3153 scope.go:117] "RemoveContainer" containerID="80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.638412    3153 scope.go:117] "RemoveContainer" containerID="4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.639100    3153 scope.go:117] "RemoveContainer" containerID="dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.717856    3153 kubelet_node_status.go:73] "Attempting to register node" node="pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: E0319 20:24:14.718839    3153 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.29:8443: connect: connection refused" node="pause-746219"
	Mar 19 20:24:15 pause-746219 kubelet[3153]: W0319 20:24:15.005864    3153 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.29:8443: connect: connection refused
	Mar 19 20:24:15 pause-746219 kubelet[3153]: E0319 20:24:15.005984    3153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.29:8443: connect: connection refused
	Mar 19 20:24:15 pause-746219 kubelet[3153]: I0319 20:24:15.520666    3153 kubelet_node_status.go:73] "Attempting to register node" node="pause-746219"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.560175    3153 kubelet_node_status.go:112] "Node was previously registered" node="pause-746219"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.560793    3153 kubelet_node_status.go:76] "Successfully registered node" node="pause-746219"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.562647    3153 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.563972    3153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.998284    3153 apiserver.go:52] "Watching apiserver"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.002906    3153 topology_manager.go:215] "Topology Admit Handler" podUID="b061b790-e6e7-4ed9-9b30-edf71179954b" podNamespace="kube-system" podName="coredns-76f75df574-df6fq"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.003186    3153 topology_manager.go:215] "Topology Admit Handler" podUID="ac3bbf7f-db46-4da0-aeee-b105b9202f35" podNamespace="kube-system" podName="kube-proxy-dtc7z"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.005481    3153 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.093096    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac3bbf7f-db46-4da0-aeee-b105b9202f35-lib-modules\") pod \"kube-proxy-dtc7z\" (UID: \"ac3bbf7f-db46-4da0-aeee-b105b9202f35\") " pod="kube-system/kube-proxy-dtc7z"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.093212    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac3bbf7f-db46-4da0-aeee-b105b9202f35-xtables-lock\") pod \"kube-proxy-dtc7z\" (UID: \"ac3bbf7f-db46-4da0-aeee-b105b9202f35\") " pod="kube-system/kube-proxy-dtc7z"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.304081    3153 scope.go:117] "RemoveContainer" containerID="cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.306894    3153 scope.go:117] "RemoveContainer" containerID="8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-746219 -n pause-746219
helpers_test.go:261: (dbg) Run:  kubectl --context pause-746219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-746219 -n pause-746219
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-746219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-746219 logs -n 25: (1.784197021s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-378078 sudo docker                         | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo cat                            | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo                                | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo find                           | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-378078 sudo crio                           | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-378078                                     | cilium-378078             | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	| start   | -p force-systemd-env-587385                          | force-systemd-env-587385  | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-746219                                      | pause-746219              | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:24 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-910871 ssh cat                    | force-systemd-flag-910871 | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-910871                         | force-systemd-flag-910871 | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC | 19 Mar 24 20:23 UTC |
	| start   | -p cert-options-346618                               | cert-options-346618       | jenkins | v1.32.0 | 19 Mar 24 20:23 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-853797                         | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p kubernetes-upgrade-853797                         | kubernetes-upgrade-853797 | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-587385                          | force-systemd-env-587385  | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:24 UTC |
	| start   | -p cert-expiration-428153                            | cert-expiration-428153    | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:24:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:24:23.725286   55554 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:24:23.725407   55554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:24:23.725412   55554 out.go:304] Setting ErrFile to fd 2...
	I0319 20:24:23.725415   55554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:24:23.725619   55554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:24:23.726137   55554 out.go:298] Setting JSON to false
	I0319 20:24:23.727023   55554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7562,"bootTime":1710872302,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:24:23.727071   55554 start.go:139] virtualization: kvm guest
	I0319 20:24:23.729761   55554 out.go:177] * [cert-expiration-428153] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:24:23.731532   55554 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:24:23.731496   55554 notify.go:220] Checking for updates...
	I0319 20:24:23.733243   55554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:24:23.734952   55554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:24:23.736453   55554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:24:23.737895   55554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:24:23.739279   55554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:24:23.741400   55554 config.go:182] Loaded profile config "cert-options-346618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:23.741477   55554 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:24:23.741585   55554 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:23.741660   55554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:24:23.778255   55554 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:24:23.779935   55554 start.go:297] selected driver: kvm2
	I0319 20:24:23.779942   55554 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:24:23.779952   55554 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:24:23.780733   55554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:24:23.780801   55554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:24:23.795716   55554 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:24:23.795754   55554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 20:24:23.795974   55554 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 20:24:23.796023   55554 cni.go:84] Creating CNI manager for ""
	I0319 20:24:23.796031   55554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:24:23.796044   55554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 20:24:23.796086   55554 start.go:340] cluster config:
	{Name:cert-expiration-428153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-428153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:24:23.796161   55554 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:24:23.798527   55554 out.go:177] * Starting "cert-expiration-428153" primary control-plane node in "cert-expiration-428153" cluster
	I0319 20:24:20.224580   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:20.225071   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find current IP address of domain cert-options-346618 in network mk-cert-options-346618
	I0319 20:24:20.225087   55088 main.go:141] libmachine: (cert-options-346618) DBG | I0319 20:24:20.225019   55186 retry.go:31] will retry after 2.772249273s: waiting for machine to come up
	I0319 20:24:22.998132   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:23.013869   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find current IP address of domain cert-options-346618 in network mk-cert-options-346618
	I0319 20:24:23.013888   55088 main.go:141] libmachine: (cert-options-346618) DBG | I0319 20:24:23.013801   55186 retry.go:31] will retry after 3.666703177s: waiting for machine to come up
	I0319 20:24:22.516500   54890 pod_ready.go:102] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:24.521252   54890 pod_ready.go:102] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:25.516083   54890 pod_ready.go:92] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:25.516105   54890 pod_ready.go:81] duration metric: took 5.006998774s for pod "etcd-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:25.516114   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:23.800189   55554 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:24:23.800228   55554 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:24:23.800233   55554 cache.go:56] Caching tarball of preloaded images
	I0319 20:24:23.800371   55554 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:24:23.800385   55554 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:24:23.800481   55554 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/config.json ...
	I0319 20:24:23.800503   55554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-expiration-428153/config.json: {Name:mk87e481ad903f92be7d6a0d22d14bb92d36dbbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:23.800665   55554 start.go:360] acquireMachinesLock for cert-expiration-428153: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:24:26.682167   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:26.682722   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find current IP address of domain cert-options-346618 in network mk-cert-options-346618
	I0319 20:24:26.682739   55088 main.go:141] libmachine: (cert-options-346618) DBG | I0319 20:24:26.682672   55186 retry.go:31] will retry after 4.107731031s: waiting for machine to come up
	I0319 20:24:27.523894   54890 pod_ready.go:102] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:30.023918   54890 pod_ready.go:102] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:32.373812   55361 start.go:364] duration metric: took 19.03028141s to acquireMachinesLock for "kubernetes-upgrade-853797"
	I0319 20:24:32.373861   55361 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:24:32.373869   55361 fix.go:54] fixHost starting: 
	I0319 20:24:32.374258   55361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:24:32.374308   55361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:24:32.392058   55361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0319 20:24:32.392515   55361 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:24:32.392999   55361 main.go:141] libmachine: Using API Version  1
	I0319 20:24:32.393018   55361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:24:32.393362   55361 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:24:32.393576   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	I0319 20:24:32.393722   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .GetState
	I0319 20:24:32.395582   55361 fix.go:112] recreateIfNeeded on kubernetes-upgrade-853797: state=Stopped err=<nil>
	I0319 20:24:32.395608   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .DriverName
	W0319 20:24:32.395761   55361 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:24:32.398045   55361 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-853797" ...
	I0319 20:24:32.399697   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Calling .Start
	I0319 20:24:32.399869   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring networks are active...
	I0319 20:24:32.400624   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring network default is active
	I0319 20:24:32.401021   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Ensuring network mk-kubernetes-upgrade-853797 is active
	I0319 20:24:32.401447   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Getting domain xml...
	I0319 20:24:32.402186   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Creating domain...
	I0319 20:24:30.794840   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.795342   55088 main.go:141] libmachine: (cert-options-346618) Found IP for machine: 192.168.61.123
	I0319 20:24:30.795360   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has current primary IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.795380   55088 main.go:141] libmachine: (cert-options-346618) Reserving static IP address...
	I0319 20:24:30.795827   55088 main.go:141] libmachine: (cert-options-346618) DBG | unable to find host DHCP lease matching {name: "cert-options-346618", mac: "52:54:00:0e:ec:75", ip: "192.168.61.123"} in network mk-cert-options-346618
	I0319 20:24:30.873231   55088 main.go:141] libmachine: (cert-options-346618) DBG | Getting to WaitForSSH function...
	I0319 20:24:30.873253   55088 main.go:141] libmachine: (cert-options-346618) Reserved static IP address: 192.168.61.123
	I0319 20:24:30.873266   55088 main.go:141] libmachine: (cert-options-346618) Waiting for SSH to be available...
	I0319 20:24:30.876571   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.877021   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:30.877043   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:30.877165   55088 main.go:141] libmachine: (cert-options-346618) DBG | Using SSH client type: external
	I0319 20:24:30.877188   55088 main.go:141] libmachine: (cert-options-346618) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa (-rw-------)
	I0319 20:24:30.877228   55088 main.go:141] libmachine: (cert-options-346618) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:24:30.877242   55088 main.go:141] libmachine: (cert-options-346618) DBG | About to run SSH command:
	I0319 20:24:30.877254   55088 main.go:141] libmachine: (cert-options-346618) DBG | exit 0
	I0319 20:24:31.009165   55088 main.go:141] libmachine: (cert-options-346618) DBG | SSH cmd err, output: <nil>: 
	I0319 20:24:31.009486   55088 main.go:141] libmachine: (cert-options-346618) KVM machine creation complete!
	I0319 20:24:31.009840   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetConfigRaw
	I0319 20:24:31.010460   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:31.010662   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:31.010842   55088 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:24:31.010852   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetState
	I0319 20:24:31.012040   55088 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:24:31.012048   55088 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:24:31.012054   55088 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:24:31.012062   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.014427   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.014882   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.014918   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.015028   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.015206   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.015375   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.015493   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.015625   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.015865   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.015874   55088 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:24:31.123929   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:24:31.123943   55088 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:24:31.123951   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.126736   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.127071   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.127083   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.127242   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.127434   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.127591   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.127738   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.127863   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.128048   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.128053   55088 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:24:31.237297   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:24:31.237363   55088 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:24:31.237368   55088 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:24:31.237375   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetMachineName
	I0319 20:24:31.237622   55088 buildroot.go:166] provisioning hostname "cert-options-346618"
	I0319 20:24:31.237640   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetMachineName
	I0319 20:24:31.237832   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.240306   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.240687   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.240709   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.240865   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.241043   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.241204   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.241344   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.241535   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.241685   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.241692   55088 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-346618 && echo "cert-options-346618" | sudo tee /etc/hostname
	I0319 20:24:31.368791   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-346618
	
	I0319 20:24:31.368807   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.371327   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.371637   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.371647   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.371907   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.372075   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.372202   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.372349   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.372505   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.372678   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.372694   55088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-346618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-346618/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-346618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:24:31.496816   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:24:31.496834   55088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:24:31.496849   55088 buildroot.go:174] setting up certificates
	I0319 20:24:31.496857   55088 provision.go:84] configureAuth start
	I0319 20:24:31.496864   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetMachineName
	I0319 20:24:31.497119   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:31.500356   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.500742   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.500764   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.500930   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.503447   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.503760   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.503780   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.503936   55088 provision.go:143] copyHostCerts
	I0319 20:24:31.503995   55088 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:24:31.504000   55088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:24:31.504051   55088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:24:31.504127   55088 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:24:31.504130   55088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:24:31.504152   55088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:24:31.504193   55088 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:24:31.504196   55088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:24:31.504221   55088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:24:31.504293   55088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.cert-options-346618 san=[127.0.0.1 192.168.61.123 cert-options-346618 localhost minikube]
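	The server certificate generated above carries SANs for 127.0.0.1, the VM IP, the profile name, localhost and minikube. As a quick, optional sanity check (not something the test performs), the SAN list of the resulting server.pem can be read back with openssl; the path is the one named in the log:

	    # Print the Subject Alternative Names of the freshly generated server cert
	    # and check that the VM IP 192.168.61.123 is among them.
	    CERT=/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem
	    openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Alternative Name'
	    openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Alternative Name' | grep -q '192.168.61.123' \
	      && echo 'SAN contains VM IP' || echo 'SAN missing VM IP'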
	I0319 20:24:31.653767   55088 provision.go:177] copyRemoteCerts
	I0319 20:24:31.653808   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:24:31.653837   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.656283   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.656611   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.656634   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.656815   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.657008   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.657137   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.657279   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:31.743193   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:24:31.772267   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:24:31.799828   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:24:31.827928   55088 provision.go:87] duration metric: took 331.061574ms to configureAuth
	I0319 20:24:31.827943   55088 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:24:31.828091   55088 config.go:182] Loaded profile config "cert-options-346618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:31.828148   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:31.831103   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.831496   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:31.831521   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:31.831707   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:31.831890   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.832059   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:31.832200   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:31.832390   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:31.832550   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:31.832563   55088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:24:32.116498   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:24:32.116516   55088 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:24:32.116525   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetURL
	I0319 20:24:32.117984   55088 main.go:141] libmachine: (cert-options-346618) DBG | Using libvirt version 6000000
	I0319 20:24:32.120296   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.120683   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.120708   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.120877   55088 main.go:141] libmachine: Docker is up and running!
	I0319 20:24:32.120887   55088 main.go:141] libmachine: Reticulating splines...
	I0319 20:24:32.120893   55088 client.go:171] duration metric: took 25.731119186s to LocalClient.Create
	I0319 20:24:32.120917   55088 start.go:167] duration metric: took 25.731197527s to libmachine.API.Create "cert-options-346618"
	I0319 20:24:32.120922   55088 start.go:293] postStartSetup for "cert-options-346618" (driver="kvm2")
	I0319 20:24:32.120930   55088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:24:32.120942   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.121163   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:24:32.121181   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.123533   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.123877   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.123900   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.124080   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.124229   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.124380   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.124522   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:32.211256   55088 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:24:32.216107   55088 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:24:32.216126   55088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:24:32.216181   55088 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:24:32.216253   55088 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:24:32.216412   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:24:32.226216   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:24:32.253707   55088 start.go:296] duration metric: took 132.774704ms for postStartSetup
	I0319 20:24:32.253762   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetConfigRaw
	I0319 20:24:32.254328   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:32.257045   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.257388   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.257403   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.257661   55088 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/config.json ...
	I0319 20:24:32.257822   55088 start.go:128] duration metric: took 25.892205293s to createHost
	I0319 20:24:32.257842   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.260124   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.260503   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.260525   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.260618   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.260810   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.260984   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.261109   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.261257   55088 main.go:141] libmachine: Using SSH client type: native
	I0319 20:24:32.261425   55088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0319 20:24:32.261430   55088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:24:32.373639   55088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879872.313453987
	
	I0319 20:24:32.373653   55088 fix.go:216] guest clock: 1710879872.313453987
	I0319 20:24:32.373663   55088 fix.go:229] Guest: 2024-03-19 20:24:32.313453987 +0000 UTC Remote: 2024-03-19 20:24:32.25783231 +0000 UTC m=+37.571314777 (delta=55.621677ms)
	I0319 20:24:32.373726   55088 fix.go:200] guest clock delta is within tolerance: 55.621677ms
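	The clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the ~55ms skew. A minimal standalone sketch of the same comparison, assuming an illustrative 2-second tolerance (the log does not show minikube's actual threshold):

	    #!/usr/bin/env bash
	    # Compare guest and host epoch timestamps and report whether the skew is tolerable.
	    guest=$(ssh docker@192.168.61.123 'date +%s.%N')   # guest clock, queried over SSH as in the log
	    host=$(date +%s.%N)                                 # local/host clock
	    delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.6f", d }')
	    tolerance=2   # seconds; assumed value for illustration only
	    if awk -v d="$delta" -v t="$tolerance" 'BEGIN { exit !(d <= t) }'; then
	      echo "guest clock delta ${delta}s is within tolerance"
	    else
	      echo "guest clock delta ${delta}s exceeds tolerance; a resync would be needed"
	    fi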
	I0319 20:24:32.373736   55088 start.go:83] releasing machines lock for "cert-options-346618", held for 26.008280751s
	I0319 20:24:32.373772   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.374097   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:32.376955   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.377373   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.377418   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.377544   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.378220   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.378397   55088 main.go:141] libmachine: (cert-options-346618) Calling .DriverName
	I0319 20:24:32.378471   55088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:24:32.378514   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.378607   55088 ssh_runner.go:195] Run: cat /version.json
	I0319 20:24:32.378622   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHHostname
	I0319 20:24:32.381296   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.381631   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.381658   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.381675   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.381837   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.382006   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.382175   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.382181   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:32.382200   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:32.382319   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:32.382351   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHPort
	I0319 20:24:32.382486   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHKeyPath
	I0319 20:24:32.382640   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetSSHUsername
	I0319 20:24:32.382810   55088 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/cert-options-346618/id_rsa Username:docker}
	I0319 20:24:32.470499   55088 ssh_runner.go:195] Run: systemctl --version
	I0319 20:24:32.496506   55088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:24:32.664452   55088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:24:32.672787   55088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:24:32.672849   55088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:24:32.691540   55088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:24:32.691552   55088 start.go:494] detecting cgroup driver to use...
	I0319 20:24:32.691603   55088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:24:32.710523   55088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:24:32.728372   55088 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:24:32.728425   55088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:24:32.745168   55088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:24:32.763556   55088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:24:32.895708   55088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:24:33.054418   55088 docker.go:233] disabling docker service ...
	I0319 20:24:33.054462   55088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:24:33.074599   55088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:24:33.092858   55088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:24:33.261128   55088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:24:33.425464   55088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:24:33.443581   55088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:24:33.467192   55088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:24:33.467273   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.480401   55088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:24:33.480471   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.492472   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.504380   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.516746   55088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:24:33.530157   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.542865   55088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:24:33.563632   55088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
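	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A small verification sketch (not part of the test) that reads the keys back out and shows what the edits are expected to have produced:

	    # Read back the values the sed edits above should have left in the drop-in.
	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    sudo grep -A2 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	    # Expected output, assuming the edits applied cleanly:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]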
	I0319 20:24:33.576934   55088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:24:33.587765   55088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:24:33.587808   55088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:24:33.603546   55088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
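	Above, reading net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is enabled. The same fallback as a standalone sketch, with a read-back added for illustration:

	    #!/usr/bin/env bash
	    set -euo pipefail
	    # If the bridge-netfilter sysctl is not visible yet, load br_netfilter first,
	    # then enable IPv4 forwarding and read both values back.
	    if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	      sudo modprobe br_netfilter
	    fi
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward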
	I0319 20:24:33.615406   55088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:33.769572   55088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:24:33.925026   55088 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:24:33.925097   55088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:24:33.931181   55088 start.go:562] Will wait 60s for crictl version
	I0319 20:24:33.931249   55088 ssh_runner.go:195] Run: which crictl
	I0319 20:24:33.935984   55088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:24:33.984596   55088 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:24:33.984677   55088 ssh_runner.go:195] Run: crio --version
	I0319 20:24:34.024124   55088 ssh_runner.go:195] Run: crio --version
	I0319 20:24:34.062129   55088 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:24:32.530081   54890 pod_ready.go:102] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"False"
	I0319 20:24:34.025904   54890 pod_ready.go:92] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.025933   54890 pod_ready.go:81] duration metric: took 8.509811287s for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.025947   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.535319   54890 pod_ready.go:92] pod "kube-controller-manager-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.535352   54890 pod_ready.go:81] duration metric: took 509.395321ms for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.535373   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.541859   54890 pod_ready.go:92] pod "kube-proxy-dtc7z" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.541893   54890 pod_ready.go:81] duration metric: took 6.5098ms for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.541906   54890 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.548341   54890 pod_ready.go:92] pod "kube-scheduler-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.548369   54890 pod_ready.go:81] duration metric: took 6.452173ms for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.548380   54890 pod_ready.go:38] duration metric: took 14.056332754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:24:34.548402   54890 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:24:34.566095   54890 ops.go:34] apiserver oom_adj: -16
	I0319 20:24:34.566115   54890 kubeadm.go:591] duration metric: took 22.688476668s to restartPrimaryControlPlane
	I0319 20:24:34.566127   54890 kubeadm.go:393] duration metric: took 22.837053133s to StartCluster
	I0319 20:24:34.566145   54890 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:34.566216   54890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:24:34.566971   54890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:34.567197   54890 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:24:34.569177   54890 out.go:177] * Verifying Kubernetes components...
	I0319 20:24:34.567273   54890 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:24:34.567441   54890 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:34.570726   54890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:34.572318   54890 out.go:177] * Enabled addons: 
	I0319 20:24:34.063439   55088 main.go:141] libmachine: (cert-options-346618) Calling .GetIP
	I0319 20:24:34.066508   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:34.066874   55088 main.go:141] libmachine: (cert-options-346618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:ec:75", ip: ""} in network mk-cert-options-346618: {Iface:virbr3 ExpiryTime:2024-03-19 21:24:23 +0000 UTC Type:0 Mac:52:54:00:0e:ec:75 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:cert-options-346618 Clientid:01:52:54:00:0e:ec:75}
	I0319 20:24:34.066893   55088 main.go:141] libmachine: (cert-options-346618) DBG | domain cert-options-346618 has defined IP address 192.168.61.123 and MAC address 52:54:00:0e:ec:75 in network mk-cert-options-346618
	I0319 20:24:34.067112   55088 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:24:34.072986   55088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:24:34.091336   55088 kubeadm.go:877] updating cluster {Name:cert-options-346618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:cert-options-346618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:24:34.091431   55088 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:24:34.091498   55088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:24:34.125633   55088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:24:34.125681   55088 ssh_runner.go:195] Run: which lz4
	I0319 20:24:34.130354   55088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:24:34.135446   55088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:24:34.135469   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:24:34.573798   54890 addons.go:505] duration metric: took 6.527447ms for enable addons: enabled=[]
	I0319 20:24:34.798284   54890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:24:34.821845   54890 node_ready.go:35] waiting up to 6m0s for node "pause-746219" to be "Ready" ...
	I0319 20:24:34.826733   54890 node_ready.go:49] node "pause-746219" has status "Ready":"True"
	I0319 20:24:34.826758   54890 node_ready.go:38] duration metric: took 4.877005ms for node "pause-746219" to be "Ready" ...
	I0319 20:24:34.826770   54890 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:24:34.833425   54890 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-df6fq" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.841881   54890 pod_ready.go:92] pod "coredns-76f75df574-df6fq" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:34.841909   54890 pod_ready.go:81] duration metric: took 8.455556ms for pod "coredns-76f75df574-df6fq" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:34.841918   54890 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.222339   54890 pod_ready.go:92] pod "etcd-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:35.222369   54890 pod_ready.go:81] duration metric: took 380.439665ms for pod "etcd-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.222380   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.622646   54890 pod_ready.go:92] pod "kube-apiserver-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:35.623836   54890 pod_ready.go:81] duration metric: took 401.441624ms for pod "kube-apiserver-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:35.623857   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.022312   54890 pod_ready.go:92] pod "kube-controller-manager-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:36.022338   54890 pod_ready.go:81] duration metric: took 398.472219ms for pod "kube-controller-manager-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.022353   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.421495   54890 pod_ready.go:92] pod "kube-proxy-dtc7z" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:36.421520   54890 pod_ready.go:81] duration metric: took 399.159369ms for pod "kube-proxy-dtc7z" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.421529   54890 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.820855   54890 pod_ready.go:92] pod "kube-scheduler-pause-746219" in "kube-system" namespace has status "Ready":"True"
	I0319 20:24:36.820887   54890 pod_ready.go:81] duration metric: took 399.350816ms for pod "kube-scheduler-pause-746219" in "kube-system" namespace to be "Ready" ...
	I0319 20:24:36.820899   54890 pod_ready.go:38] duration metric: took 1.994117795s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
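	The pod_ready loop above polls each system-critical pod in kube-system for the Ready condition. Roughly the same check can be expressed with kubectl's built-in waiting, assuming the kubeconfig context created for the pause-746219 profile:

	    # Wait for the same label groups the log polls, using kubectl wait instead of per-pod polling.
	    kubectl --context pause-746219 -n kube-system wait pod \
	      -l 'k8s-app in (kube-dns, kube-proxy)' \
	      --for=condition=Ready --timeout=6m
	    kubectl --context pause-746219 -n kube-system wait pod \
	      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
	      --for=condition=Ready --timeout=6m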
	I0319 20:24:36.820917   54890 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:24:36.820979   54890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:24:36.837674   54890 api_server.go:72] duration metric: took 2.270445023s to wait for apiserver process to appear ...
	I0319 20:24:36.837702   54890 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:24:36.837724   54890 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0319 20:24:36.842306   54890 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
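	The healthz probe above succeeds on the first attempt here, but in general it is retried until the endpoint answers. An equivalent standalone poll (sketch; the 2-minute budget and 2-second interval are assumptions, not minikube's actual timings):

	    #!/usr/bin/env bash
	    # Poll the apiserver /healthz endpoint until it answers "ok" or the budget runs out.
	    endpoint='https://192.168.39.29:8443/healthz'
	    for _ in $(seq 1 60); do
	      if [ "$(curl -ks --max-time 2 "$endpoint")" = "ok" ]; then
	        echo 'apiserver is healthy'
	        exit 0
	      fi
	      sleep 2
	    done
	    echo 'apiserver did not become healthy in time' >&2
	    exit 1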
	I0319 20:24:36.843919   54890 api_server.go:141] control plane version: v1.29.3
	I0319 20:24:36.843938   54890 api_server.go:131] duration metric: took 6.228808ms to wait for apiserver health ...
	I0319 20:24:36.843949   54890 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:24:37.023755   54890 system_pods.go:59] 6 kube-system pods found
	I0319 20:24:37.023787   54890 system_pods.go:61] "coredns-76f75df574-df6fq" [b061b790-e6e7-4ed9-9b30-edf71179954b] Running
	I0319 20:24:37.023793   54890 system_pods.go:61] "etcd-pause-746219" [f3194da2-cfb9-4527-89b9-5661373ac7a4] Running
	I0319 20:24:37.023797   54890 system_pods.go:61] "kube-apiserver-pause-746219" [8ed5bea1-554a-47ca-8ea7-94b2e1ff0e8f] Running
	I0319 20:24:37.023802   54890 system_pods.go:61] "kube-controller-manager-pause-746219" [21851eec-9279-4e5e-904f-23bf8c796279] Running
	I0319 20:24:37.023806   54890 system_pods.go:61] "kube-proxy-dtc7z" [ac3bbf7f-db46-4da0-aeee-b105b9202f35] Running
	I0319 20:24:37.023810   54890 system_pods.go:61] "kube-scheduler-pause-746219" [be26e770-cf88-47f9-94e2-015c782a89dc] Running
	I0319 20:24:37.023819   54890 system_pods.go:74] duration metric: took 179.863163ms to wait for pod list to return data ...
	I0319 20:24:37.023829   54890 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:24:37.221008   54890 default_sa.go:45] found service account: "default"
	I0319 20:24:37.221040   54890 default_sa.go:55] duration metric: took 197.204319ms for default service account to be created ...
	I0319 20:24:37.221051   54890 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:24:37.426742   54890 system_pods.go:86] 6 kube-system pods found
	I0319 20:24:37.426779   54890 system_pods.go:89] "coredns-76f75df574-df6fq" [b061b790-e6e7-4ed9-9b30-edf71179954b] Running
	I0319 20:24:37.426788   54890 system_pods.go:89] "etcd-pause-746219" [f3194da2-cfb9-4527-89b9-5661373ac7a4] Running
	I0319 20:24:37.426794   54890 system_pods.go:89] "kube-apiserver-pause-746219" [8ed5bea1-554a-47ca-8ea7-94b2e1ff0e8f] Running
	I0319 20:24:37.426801   54890 system_pods.go:89] "kube-controller-manager-pause-746219" [21851eec-9279-4e5e-904f-23bf8c796279] Running
	I0319 20:24:37.426807   54890 system_pods.go:89] "kube-proxy-dtc7z" [ac3bbf7f-db46-4da0-aeee-b105b9202f35] Running
	I0319 20:24:37.426813   54890 system_pods.go:89] "kube-scheduler-pause-746219" [be26e770-cf88-47f9-94e2-015c782a89dc] Running
	I0319 20:24:37.426823   54890 system_pods.go:126] duration metric: took 205.763828ms to wait for k8s-apps to be running ...
	I0319 20:24:37.426836   54890 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:24:37.426891   54890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:24:37.445840   54890 system_svc.go:56] duration metric: took 18.99447ms WaitForService to wait for kubelet
	I0319 20:24:37.445869   54890 kubeadm.go:576] duration metric: took 2.878643311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:24:37.445890   54890 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:24:37.620906   54890 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:24:37.620936   54890 node_conditions.go:123] node cpu capacity is 2
	I0319 20:24:37.620951   54890 node_conditions.go:105] duration metric: took 175.054177ms to run NodePressure ...
	I0319 20:24:37.620967   54890 start.go:240] waiting for startup goroutines ...
	I0319 20:24:37.620978   54890 start.go:245] waiting for cluster config update ...
	I0319 20:24:37.620992   54890 start.go:254] writing updated cluster config ...
	I0319 20:24:37.621375   54890 ssh_runner.go:195] Run: rm -f paused
	I0319 20:24:37.680126   54890 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:24:37.682483   54890 out.go:177] * Done! kubectl is now configured to use "pause-746219" cluster and "default" namespace by default
	I0319 20:24:33.689114   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) Waiting to get IP...
	I0319 20:24:33.690157   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:33.690725   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:33.690775   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:33.690662   55621 retry.go:31] will retry after 289.570846ms: waiting for machine to come up
	I0319 20:24:33.982368   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:33.983002   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:33.983035   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:33.982954   55621 retry.go:31] will retry after 374.598165ms: waiting for machine to come up
	I0319 20:24:34.359728   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:34.360278   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:34.360305   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:34.360177   55621 retry.go:31] will retry after 401.432721ms: waiting for machine to come up
	I0319 20:24:34.763863   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:34.764455   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:34.764487   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:34.764391   55621 retry.go:31] will retry after 486.944509ms: waiting for machine to come up
	I0319 20:24:35.252838   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:35.253468   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:35.253494   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:35.253405   55621 retry.go:31] will retry after 625.725966ms: waiting for machine to come up
	I0319 20:24:35.880831   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:35.881368   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:35.881416   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:35.881329   55621 retry.go:31] will retry after 649.013564ms: waiting for machine to come up
	I0319 20:24:36.532458   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:36.533030   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:36.533073   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:36.532974   55621 retry.go:31] will retry after 1.010336733s: waiting for machine to come up
	I0319 20:24:37.545241   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:37.545799   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:37.545830   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:37.545723   55621 retry.go:31] will retry after 1.128433955s: waiting for machine to come up
	I0319 20:24:35.945521   55088 crio.go:462] duration metric: took 1.815214441s to copy over tarball
	I0319 20:24:35.945611   55088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:24:38.563935   55088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.618252507s)
	I0319 20:24:38.563953   55088 crio.go:469] duration metric: took 2.618415726s to extract the tarball
	I0319 20:24:38.563959   55088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:24:38.603079   55088 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:24:38.667070   55088 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:24:38.667083   55088 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:24:38.667091   55088 kubeadm.go:928] updating node { 192.168.61.123 8555 v1.29.3 crio true true} ...
	I0319 20:24:38.667238   55088 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-346618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:cert-options-346618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:24:38.667325   55088 ssh_runner.go:195] Run: crio config
	I0319 20:24:38.727536   55088 cni.go:84] Creating CNI manager for ""
	I0319 20:24:38.727551   55088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:24:38.727564   55088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:24:38.727596   55088 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8555 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-346618 NodeName:cert-options-346618 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:24:38.727766   55088 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-346618"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:24:38.727839   55088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:24:38.740777   55088 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:24:38.740848   55088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:24:38.755620   55088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0319 20:24:38.777701   55088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:24:38.799691   55088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
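	The rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new (2163 bytes). Before it is consumed, the non-default knobs can be sanity-checked in place; a grep-based inspection sketch (not a step the test runs):

	    # Confirm the rendered config carries the custom bindPort, certSANs and control-plane endpoint.
	    sudo grep -E 'bindPort|certSANs|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml.new
	    # Expected, per the config printed in the log:
	    #   bindPort: 8555
	    #   certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	    #   controlPlaneEndpoint: control-plane.minikube.internal:8555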
	I0319 20:24:38.821778   55088 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0319 20:24:38.827770   55088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:24:38.851572   55088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:24:39.009938   55088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:24:39.035070   55088 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618 for IP: 192.168.61.123
	I0319 20:24:39.035082   55088 certs.go:194] generating shared ca certs ...
	I0319 20:24:39.035101   55088 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:39.035265   55088 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:24:39.035313   55088 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:24:39.035320   55088 certs.go:256] generating profile certs ...
	I0319 20:24:39.035420   55088 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/client.key
	I0319 20:24:39.035437   55088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/client.crt with IP's: []
	I0319 20:24:39.427702   55088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/client.crt ...
	I0319 20:24:39.427721   55088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/client.crt: {Name:mk0d96d503f51cf9f99aa9f65a40b2116d5053eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:39.455057   55088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/client.key ...
	I0319 20:24:39.455082   55088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/client.key: {Name:mk2ec6be350f6dbce6a18eedb8214ebaff290fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:39.455249   55088 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.key.c45fe30a
	I0319 20:24:39.455276   55088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.crt.c45fe30a with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.123]
	I0319 20:24:39.760314   55088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.crt.c45fe30a ...
	I0319 20:24:39.880193   55088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.crt.c45fe30a: {Name:mk6dce58d0ce29f6a6e0501ee24511b3626cf7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:39.880415   55088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.key.c45fe30a ...
	I0319 20:24:39.880426   55088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.key.c45fe30a: {Name:mka3b559dd87ffbf0df92f74465242eef7702d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:39.880538   55088 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.crt.c45fe30a -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.crt
	I0319 20:24:39.880652   55088 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.key.c45fe30a -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.key
	I0319 20:24:39.880717   55088 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.key
	I0319 20:24:39.880735   55088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.crt with IP's: []
	I0319 20:24:40.097328   55088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.crt ...
	I0319 20:24:40.097348   55088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.crt: {Name:mkdfe40dbf41a23964d13dc9748ec10d0ea84178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:40.097523   55088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.key ...
	I0319 20:24:40.097532   55088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.key: {Name:mk63554e8d8c997b15e5383db5dd061f90f211e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:40.097689   55088 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:24:40.097728   55088 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:24:40.097734   55088 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:24:40.097758   55088 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:24:40.097775   55088 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:24:40.097796   55088 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:24:40.097825   55088 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:24:40.099940   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:24:40.129420   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:24:40.158776   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:24:40.190710   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:24:40.218319   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0319 20:24:40.247744   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:24:40.275586   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:24:40.302648   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/cert-options-346618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 20:24:40.330132   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:24:40.359064   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:24:40.387965   55088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:24:40.418148   55088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:24:40.439523   55088 ssh_runner.go:195] Run: openssl version
	I0319 20:24:40.446083   55088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:24:40.458897   55088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:24:40.464309   55088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:24:40.464343   55088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:24:40.471926   55088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:24:40.485741   55088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:24:40.500721   55088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:24:40.506319   55088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:24:40.506378   55088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:24:40.513323   55088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:24:40.526642   55088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:24:40.539615   55088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:24:40.545383   55088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:24:40.545429   55088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:24:40.552168   55088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
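	Each certificate placed under /usr/share/ca-certificates is also exposed to OpenSSL's trust store via a hash-named symlink in /etc/ssl/certs, where the name is the subject hash printed by openssl x509 -hash. A minimal sketch of that convention for the minikubeCA cert checked above (hash b5213941 taken from this log):
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0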
	I0319 20:24:40.565632   55088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:24:40.570478   55088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 20:24:40.570526   55088 kubeadm.go:391] StartCluster: {Name:cert-options-346618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
29.3 ClusterName:cert-options-346618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:24:40.570594   55088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:24:40.570646   55088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:24:40.626288   55088 cri.go:89] found id: ""
	I0319 20:24:40.626357   55088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 20:24:40.640530   55088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:24:40.677692   55088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:24:40.692470   55088 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:24:40.692482   55088 kubeadm.go:156] found existing configuration files:
	
	I0319 20:24:40.692540   55088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0319 20:24:40.711797   55088 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:24:40.711857   55088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:24:40.723654   55088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0319 20:24:40.735108   55088 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:24:40.735159   55088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:24:40.746784   55088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0319 20:24:40.758434   55088 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:24:40.758490   55088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:24:40.770039   55088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0319 20:24:40.780989   55088 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:24:40.781043   55088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:24:40.792334   55088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:24:40.898854   55088 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:24:40.898929   55088 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:24:41.031964   55088 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:24:41.032134   55088 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:24:41.032294   55088 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:24:41.262543   55088 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:24:38.676225   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:38.676771   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:38.676810   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:38.676701   55621 retry.go:31] will retry after 1.287251545s: waiting for machine to come up
	I0319 20:24:39.965238   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:39.965756   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:39.965782   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:39.965704   55621 retry.go:31] will retry after 2.009455657s: waiting for machine to come up
	I0319 20:24:41.977201   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | domain kubernetes-upgrade-853797 has defined MAC address 52:54:00:39:a8:7f in network mk-kubernetes-upgrade-853797
	I0319 20:24:41.977708   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | unable to find current IP address of domain kubernetes-upgrade-853797 in network mk-kubernetes-upgrade-853797
	I0319 20:24:41.977741   55361 main.go:141] libmachine: (kubernetes-upgrade-853797) DBG | I0319 20:24:41.977666   55621 retry.go:31] will retry after 2.798925002s: waiting for machine to come up
	
	
	==> CRI-O <==
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.637868442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879883637838577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=798c6a7b-b5c5-4bb3-9ec6-83872802a02a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.638526587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b32093c4-b595-41c0-bca3-92ed34a2383a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.638588366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b32093c4-b595-41c0-bca3-92ed34a2383a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.639483750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b32093c4-b595-41c0-bca3-92ed34a2383a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.706425119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28dbdcb7-994e-485c-8d8c-7d000768db5f name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.706582495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28dbdcb7-994e-485c-8d8c-7d000768db5f name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.714263962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe21173b-0fd8-452b-9588-afbfadec06da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.715087679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879883715046220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe21173b-0fd8-452b-9588-afbfadec06da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.716104598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6539d713-d762-48da-89fb-c04c41e4aa7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.716220456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6539d713-d762-48da-89fb-c04c41e4aa7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.716607309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6539d713-d762-48da-89fb-c04c41e4aa7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.769521189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fdad450-7d16-46ec-834a-82baebb25dc3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.769625500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fdad450-7d16-46ec-834a-82baebb25dc3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.771372059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2263ec69-c4c2-4e16-a63d-f4a5e740e0fe name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.771811698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879883771786850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2263ec69-c4c2-4e16-a63d-f4a5e740e0fe name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.772911016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99d084d1-1a4a-47d1-9fbc-b390691cd9cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.772988891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99d084d1-1a4a-47d1-9fbc-b390691cd9cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.773268925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99d084d1-1a4a-47d1-9fbc-b390691cd9cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.820885449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09287f27-55d2-4b4a-808d-5132c9a3910e name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.820995820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09287f27-55d2-4b4a-808d-5132c9a3910e name=/runtime.v1.RuntimeService/Version
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.822201756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cad3640c-37be-416d-81ea-be85223b2042 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.822634112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710879883822611133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cad3640c-37be-416d-81ea-be85223b2042 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.823184277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5410f5eb-5b3b-41ba-b685-4fc48c6db8b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.823262385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5410f5eb-5b3b-41ba-b685-4fc48c6db8b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:24:43 pause-746219 crio[2674]: time="2024-03-19 20:24:43.823493215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08,PodSandboxId:2db53363d84da7eef1d13db45d73552b5789cc55b9924280e6b8f9335da2c323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710879859344072693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae,PodSandboxId:8d7bfa5a36ec2a0f09d2f958e6a90219631a836878f547aa55b4fd6eceec6536,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710879859344126747,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0,PodSandboxId:65e8e968acc8a84c22e7f702cd30e560429760badf6dcb26080da2a96fcf49e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710879854687886062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78e
b077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7,PodSandboxId:e9f70a383de70c323e1d78df6624e6f3d4b3f41848b89f33f46e2929742b11ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710879854695437841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b54
8a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785,PodSandboxId:3cff552da57d2aa0cb28b719d97136cba5cfbb9ac4082ce0ede3ad795fc9023f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710879854670992316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernete
s.container.hash: ec196d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a,PodSandboxId:8ae72d8342079b83f0b835e949a233279b1d3650d4af49a3b54b772adea26e18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710879854697408088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af,PodSandboxId:d4f301a947c37582b00a3875e1d1ad6b7f2516cf77521784a63579c5a9e03fb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1710879848541358502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dtc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac3bbf7f-db46-4da0-aeee-b105b9202f35,},Annotations:map[string]string{io.kubernetes.container.hash: ae80d39d
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e,PodSandboxId:d8b899667fd7423daf039a31646b021424c7e0539606aed7d68900b122b65e80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1710879848323261013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 015828164ebb3a003f49e90fef000fe0,},Annotations:map[string]string{io.kubernetes.container.hash: ec196d2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9,PodSandboxId:b7a9539febd1ccea5b02c22940ef8e5c3edac7caf1c8a041af7a792e80a04de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710879848326024909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be698cd9aca41bfb6299d91a22b548a1,},Annotations:map[string]string{io.kubernetes.container.hash: 550e9cf0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523,PodSandboxId:e89618feef78275c33ffbcc700414df97671bfff270ae7fda4739c6922d8c2cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1710879848191106331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a71cb0fb2f271718f665e384897f527e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e,PodSandboxId:4921aa607f46ad943103974f716cc4bae0639553892d34b2314284b2dd56038d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1710879847900370305,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-746219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81183abd78eb077b33ad2bf28f1ebfbf,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267,PodSandboxId:6fe3fa0f01841292c1accf633fbac2fc6ef76cedc1f3012a37daed40d64b9c93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710879823731972074,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-df6fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061b790-e6e7-4ed9-9b30-edf71179954b,},Annotations:map[string]string{io.kubernetes.container.hash: 9109afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5410f5eb-5b3b-41ba-b685-4fc48c6db8b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	66dcce3b597e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago       Running             coredns                   1                   8d7bfa5a36ec2       coredns-76f75df574-df6fq
	e1d904b3263b7       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   24 seconds ago       Running             kube-proxy                2                   2db53363d84da       kube-proxy-dtc7z
	449b026f0e180       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   29 seconds ago       Running             kube-controller-manager   2                   8ae72d8342079       kube-controller-manager-pause-746219
	465cb7aa444ab       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   29 seconds ago       Running             kube-apiserver            2                   e9f70a383de70       kube-apiserver-pause-746219
	8259358ff5ffe       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   29 seconds ago       Running             kube-scheduler            2                   65e8e968acc8a       kube-scheduler-pause-746219
	42006b40a7bd7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago       Running             etcd                      2                   3cff552da57d2       etcd-pause-746219
	cb2b6f5b064b1       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   35 seconds ago       Exited              kube-proxy                1                   d4f301a947c37       kube-proxy-dtc7z
	80c4dff69ab18       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   35 seconds ago       Exited              kube-apiserver            1                   b7a9539febd1c       kube-apiserver-pause-746219
	c38e906f5bb79       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   35 seconds ago       Exited              etcd                      1                   d8b899667fd74       etcd-pause-746219
	4a6072285f8d3       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   35 seconds ago       Exited              kube-controller-manager   1                   e89618feef782       kube-controller-manager-pause-746219
	dc3fa53814725       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   36 seconds ago       Exited              kube-scheduler            1                   4921aa607f46a       kube-scheduler-pause-746219
	8b1ee006a1cea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   6fe3fa0f01841       coredns-76f75df574-df6fq
	
	
	==> coredns [66dcce3b597e8c1a7debacef364bb639cf863f2758f7d428fd812f75c76375ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46312 - 23797 "HINFO IN 3137644523788626685.2320026330751616152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013625958s
	
	
	==> coredns [8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49618 - 22365 "HINFO IN 4862001822609258994.6606096006039772862. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010424627s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-746219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-746219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=pause-746219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_23_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-746219
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:24:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:24:18 +0000   Tue, 19 Mar 2024 20:23:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    pause-746219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 665fdf75264a4242bde6dba7d945c435
	  System UUID:                665fdf75-264a-4242-bde6-dba7d945c435
	  Boot ID:                    765e1546-1b69-4f7e-ba92-833c29d2c0aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-df6fq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-pause-746219                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-746219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-746219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-dtc7z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-pause-746219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-746219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-746219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-746219 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeReady                77s                kubelet          Node pause-746219 status is now: NodeReady
	  Normal  RegisteredNode           64s                node-controller  Node pause-746219 event: Registered Node pause-746219 in Controller
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node pause-746219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node pause-746219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)  kubelet          Node pause-746219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-746219 event: Registered Node pause-746219 in Controller
	
	
	==> dmesg <==
	[  +0.057624] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079152] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182769] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.149632] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.358581] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +5.037746] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059410] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.853174] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.546297] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.751758] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.072675] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.990183] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +0.130560] kauditd_printk_skb: 21 callbacks suppressed
	[Mar19 20:24] systemd-fstab-generator[2083]: Ignoring "noauto" option for root device
	[  +0.112670] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.090475] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.257144] systemd-fstab-generator[2136]: Ignoring "noauto" option for root device
	[  +0.218889] systemd-fstab-generator[2168]: Ignoring "noauto" option for root device
	[  +0.977782] systemd-fstab-generator[2464]: Ignoring "noauto" option for root device
	[  +2.177444] systemd-fstab-generator[2918]: Ignoring "noauto" option for root device
	[  +3.079391] systemd-fstab-generator[3146]: Ignoring "noauto" option for root device
	[  +0.080006] kauditd_printk_skb: 230 callbacks suppressed
	[  +5.608140] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.266167] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.936623] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	
	
	==> etcd [42006b40a7bd779622aa315e8b67631647464a3637725b88694065b489585785] <==
	{"level":"info","ts":"2024-03-19T20:24:15.092669Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:24:15.09487Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-19T20:24:15.095013Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:24:15.095074Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:24:15.095089Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:24:15.095287Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-19T20:24:15.095324Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-19T20:24:15.097641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b switched to configuration voters=(10945199911802443307)"}
	{"level":"info","ts":"2024-03-19T20:24:15.097833Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","added-peer-id":"97e52954629f162b","added-peer-peer-urls":["https://192.168.39.29:2380"]}
	{"level":"info","ts":"2024-03-19T20:24:15.098019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:24:15.09806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:24:16.34102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-19T20:24:16.341141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:24:16.341216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgPreVoteResp from 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-03-19T20:24:16.341264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.341297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgVoteResp from 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.341332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.341366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-19T20:24:16.344226Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:pause-746219 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:24:16.344419Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:24:16.347404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-19T20:24:16.347516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:24:16.353878Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:24:16.353929Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:24:16.372258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.29:2379"}
	
	
	==> etcd [c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e] <==
	
	
	==> kernel <==
	 20:24:44 up 1 min,  0 users,  load average: 0.65, 0.31, 0.12
	Linux pause-746219 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [465cb7aa444ab246ad46fb0bb6f41b262b2097cc2e9fb1d34c9ba394ad4712e7] <==
	I0319 20:24:18.325796       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0319 20:24:18.325836       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0319 20:24:18.325846       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0319 20:24:18.454592       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0319 20:24:18.480923       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0319 20:24:18.482357       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0319 20:24:18.482866       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0319 20:24:18.482920       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0319 20:24:18.482961       1 shared_informer.go:318] Caches are synced for configmaps
	I0319 20:24:18.482877       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0319 20:24:18.482887       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0319 20:24:18.490868       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 20:24:18.512328       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0319 20:24:18.512463       1 aggregator.go:165] initial CRD sync complete...
	I0319 20:24:18.512506       1 autoregister_controller.go:141] Starting autoregister controller
	I0319 20:24:18.512538       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0319 20:24:18.512568       1 cache.go:39] Caches are synced for autoregister controller
	I0319 20:24:19.285684       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 20:24:20.321715       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0319 20:24:20.335805       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0319 20:24:20.381898       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0319 20:24:20.448966       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 20:24:20.465710       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0319 20:24:30.698585       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 20:24:30.767454       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9] <==
	
	
	==> kube-controller-manager [449b026f0e180d9b3862f04fa221cc59520caa4011f4361710c637143fb7c91a] <==
	I0319 20:24:30.746882       1 shared_informer.go:318] Caches are synced for node
	I0319 20:24:30.746969       1 range_allocator.go:174] "Sending events to api server"
	I0319 20:24:30.747034       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0319 20:24:30.747066       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0319 20:24:30.747075       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0319 20:24:30.751923       1 shared_informer.go:318] Caches are synced for taint
	I0319 20:24:30.752046       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0319 20:24:30.752219       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-746219"
	I0319 20:24:30.752283       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0319 20:24:30.752617       1 event.go:376] "Event occurred" object="pause-746219" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-746219 event: Registered Node pause-746219 in Controller"
	I0319 20:24:30.757701       1 shared_informer.go:318] Caches are synced for ephemeral
	I0319 20:24:30.759016       1 shared_informer.go:318] Caches are synced for endpoint
	I0319 20:24:30.763787       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0319 20:24:30.774987       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0319 20:24:30.801866       1 shared_informer.go:318] Caches are synced for attach detach
	I0319 20:24:30.806853       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0319 20:24:30.855234       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 20:24:30.866104       1 shared_informer.go:318] Caches are synced for deployment
	I0319 20:24:30.869160       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0319 20:24:30.869409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="95.481µs"
	I0319 20:24:30.881496       1 shared_informer.go:318] Caches are synced for disruption
	I0319 20:24:30.906236       1 shared_informer.go:318] Caches are synced for resource quota
	I0319 20:24:31.282036       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 20:24:31.313618       1 shared_informer.go:318] Caches are synced for garbage collector
	I0319 20:24:31.313708       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523] <==
	
	
	==> kube-proxy [cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af] <==
	
	
	==> kube-proxy [e1d904b3263b762a238510c5520dfe1712e0370fceaa1f236c4d8927ac0b9d08] <==
	I0319 20:24:19.628717       1 server_others.go:72] "Using iptables proxy"
	I0319 20:24:19.661957       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.29"]
	I0319 20:24:19.740225       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:24:19.740276       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:24:19.740301       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:24:19.743398       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:24:19.743622       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:24:19.743666       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:24:19.750033       1 config.go:188] "Starting service config controller"
	I0319 20:24:19.750305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:24:19.750393       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:24:19.750423       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:24:19.752715       1 config.go:315] "Starting node config controller"
	I0319 20:24:19.752970       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:24:19.851376       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0319 20:24:19.851462       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:24:19.853664       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8259358ff5ffeec49856ee63d9500577d04322b4c4c81e4ec5f051ef14b233a0] <==
	I0319 20:24:18.430231       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 20:24:18.430284       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:24:18.438536       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 20:24:18.438700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 20:24:18.438832       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:24:18.439850       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0319 20:24:18.449848       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:24:18.449926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:24:18.450012       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:24:18.450044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 20:24:18.450112       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:24:18.450141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:24:18.450223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 20:24:18.451879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 20:24:18.452164       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 20:24:18.452206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0319 20:24:18.452315       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:24:18.452360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:24:18.452410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0319 20:24:18.452448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 20:24:18.452491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 20:24:18.452527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0319 20:24:18.452217       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 20:24:18.454894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0319 20:24:18.539049       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e] <==
	I0319 20:24:09.177132       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.415150    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71cb0fb2f271718f665e384897f527e-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-746219\" (UID: \"a71cb0fb2f271718f665e384897f527e\") " pod="kube-system/kube-controller-manager-pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.415169    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/015828164ebb3a003f49e90fef000fe0-etcd-data\") pod \"etcd-pause-746219\" (UID: \"015828164ebb3a003f49e90fef000fe0\") " pod="kube-system/etcd-pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.415185    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be698cd9aca41bfb6299d91a22b548a1-ca-certs\") pod \"kube-apiserver-pause-746219\" (UID: \"be698cd9aca41bfb6299d91a22b548a1\") " pod="kube-system/kube-apiserver-pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: E0319 20:24:14.612573    3153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-746219?timeout=10s\": dial tcp 192.168.39.29:8443: connect: connection refused" interval="800ms"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.635308    3153 scope.go:117] "RemoveContainer" containerID="c38e906f5bb7973b1ee3d29ee52e2926894b39cf0bf72fae21a4061e1f3e8a7e"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.636614    3153 scope.go:117] "RemoveContainer" containerID="80c4dff69ab1823df9d8974759b67d38817af3f2b01a8d6a520cecba76596ac9"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.638412    3153 scope.go:117] "RemoveContainer" containerID="4a6072285f8d305594e9ef6d382be0bc96a4a0af9f266ef7b2d6f15f849df523"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.639100    3153 scope.go:117] "RemoveContainer" containerID="dc3fa5381472517f37dc8654042cdfcddd941db481f857d26cc8686c3ed1c85e"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: I0319 20:24:14.717856    3153 kubelet_node_status.go:73] "Attempting to register node" node="pause-746219"
	Mar 19 20:24:14 pause-746219 kubelet[3153]: E0319 20:24:14.718839    3153 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.29:8443: connect: connection refused" node="pause-746219"
	Mar 19 20:24:15 pause-746219 kubelet[3153]: W0319 20:24:15.005864    3153 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.29:8443: connect: connection refused
	Mar 19 20:24:15 pause-746219 kubelet[3153]: E0319 20:24:15.005984    3153 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.29:8443: connect: connection refused
	Mar 19 20:24:15 pause-746219 kubelet[3153]: I0319 20:24:15.520666    3153 kubelet_node_status.go:73] "Attempting to register node" node="pause-746219"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.560175    3153 kubelet_node_status.go:112] "Node was previously registered" node="pause-746219"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.560793    3153 kubelet_node_status.go:76] "Successfully registered node" node="pause-746219"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.562647    3153 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.563972    3153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 19 20:24:18 pause-746219 kubelet[3153]: I0319 20:24:18.998284    3153 apiserver.go:52] "Watching apiserver"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.002906    3153 topology_manager.go:215] "Topology Admit Handler" podUID="b061b790-e6e7-4ed9-9b30-edf71179954b" podNamespace="kube-system" podName="coredns-76f75df574-df6fq"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.003186    3153 topology_manager.go:215] "Topology Admit Handler" podUID="ac3bbf7f-db46-4da0-aeee-b105b9202f35" podNamespace="kube-system" podName="kube-proxy-dtc7z"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.005481    3153 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.093096    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac3bbf7f-db46-4da0-aeee-b105b9202f35-lib-modules\") pod \"kube-proxy-dtc7z\" (UID: \"ac3bbf7f-db46-4da0-aeee-b105b9202f35\") " pod="kube-system/kube-proxy-dtc7z"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.093212    3153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac3bbf7f-db46-4da0-aeee-b105b9202f35-xtables-lock\") pod \"kube-proxy-dtc7z\" (UID: \"ac3bbf7f-db46-4da0-aeee-b105b9202f35\") " pod="kube-system/kube-proxy-dtc7z"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.304081    3153 scope.go:117] "RemoveContainer" containerID="cb2b6f5b064b1c5ebe6ee04038a165f3a962c97e132542fe85ee18e3ee0188af"
	Mar 19 20:24:19 pause-746219 kubelet[3153]: I0319 20:24:19.306894    3153 scope.go:117] "RemoveContainer" containerID="8b1ee006a1cea4ac9eb3c3fbd5a94e97991fce5af6dd4f8cf89c84b668ffc267"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-746219 -n pause-746219
helpers_test.go:261: (dbg) Run:  kubectl --context pause-746219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (311.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-159022 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-159022 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m11.231209706s)

                                                
                                                
-- stdout --
	* [old-k8s-version-159022] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-159022" primary control-plane node in "old-k8s-version-159022" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:24:46.331797   55982 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:24:46.331931   55982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:24:46.331944   55982 out.go:304] Setting ErrFile to fd 2...
	I0319 20:24:46.331960   55982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:24:46.332662   55982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:24:46.333546   55982 out.go:298] Setting JSON to false
	I0319 20:24:46.335054   55982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7584,"bootTime":1710872302,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:24:46.335213   55982 start.go:139] virtualization: kvm guest
	I0319 20:24:46.337616   55982 out.go:177] * [old-k8s-version-159022] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:24:46.339583   55982 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:24:46.340895   55982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:24:46.339625   55982 notify.go:220] Checking for updates...
	I0319 20:24:46.343386   55982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:24:46.344952   55982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:24:46.346373   55982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:24:46.347682   55982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:24:46.349542   55982 config.go:182] Loaded profile config "cert-expiration-428153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:46.349683   55982 config.go:182] Loaded profile config "cert-options-346618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:24:46.349798   55982 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:24:46.349934   55982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:24:46.386278   55982 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:24:46.387877   55982 start.go:297] selected driver: kvm2
	I0319 20:24:46.387892   55982 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:24:46.387906   55982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:24:46.388979   55982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:24:46.389052   55982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:24:46.405094   55982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:24:46.405146   55982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 20:24:46.405417   55982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:24:46.405498   55982 cni.go:84] Creating CNI manager for ""
	I0319 20:24:46.405516   55982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:24:46.405528   55982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 20:24:46.405600   55982 start.go:340] cluster config:
	{Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:24:46.405718   55982 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:24:46.407838   55982 out.go:177] * Starting "old-k8s-version-159022" primary control-plane node in "old-k8s-version-159022" cluster
	I0319 20:24:46.409431   55982 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:24:46.409470   55982 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0319 20:24:46.409482   55982 cache.go:56] Caching tarball of preloaded images
	I0319 20:24:46.409583   55982 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:24:46.409596   55982 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0319 20:24:46.409719   55982 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:24:46.409742   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json: {Name:mkb91457dccfd07540af13c7977b564d34007a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:24:46.409887   55982 start.go:360] acquireMachinesLock for old-k8s-version-159022: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:25:20.013507   55982 start.go:364] duration metric: took 33.603586686s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:25:20.013579   55982 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:25:20.013715   55982 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 20:25:20.016016   55982 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 20:25:20.016216   55982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:25:20.016272   55982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:25:20.032911   55982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0319 20:25:20.033334   55982 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:25:20.033953   55982 main.go:141] libmachine: Using API Version  1
	I0319 20:25:20.033981   55982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:25:20.034279   55982 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:25:20.034454   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:20.034596   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:20.034732   55982 start.go:159] libmachine.API.Create for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:25:20.034761   55982 client.go:168] LocalClient.Create starting
	I0319 20:25:20.034790   55982 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 20:25:20.034822   55982 main.go:141] libmachine: Decoding PEM data...
	I0319 20:25:20.034836   55982 main.go:141] libmachine: Parsing certificate...
	I0319 20:25:20.034885   55982 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 20:25:20.034903   55982 main.go:141] libmachine: Decoding PEM data...
	I0319 20:25:20.034914   55982 main.go:141] libmachine: Parsing certificate...
	I0319 20:25:20.034930   55982 main.go:141] libmachine: Running pre-create checks...
	I0319 20:25:20.034939   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .PreCreateCheck
	I0319 20:25:20.035232   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:25:20.035646   55982 main.go:141] libmachine: Creating machine...
	I0319 20:25:20.035664   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .Create
	I0319 20:25:20.035810   55982 main.go:141] libmachine: (old-k8s-version-159022) Creating KVM machine...
	I0319 20:25:20.036933   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found existing default KVM network
	I0319 20:25:20.037991   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.037847   56546 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:20:18} reservation:<nil>}
	I0319 20:25:20.038687   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.038572   56546 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:58:8d} reservation:<nil>}
	I0319 20:25:20.039469   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.039386   56546 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ca4e0}
	I0319 20:25:20.039509   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | created network xml: 
	I0319 20:25:20.039532   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | <network>
	I0319 20:25:20.039545   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   <name>mk-old-k8s-version-159022</name>
	I0319 20:25:20.039562   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   <dns enable='no'/>
	I0319 20:25:20.039575   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   
	I0319 20:25:20.039585   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0319 20:25:20.039598   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |     <dhcp>
	I0319 20:25:20.039616   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0319 20:25:20.039626   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |     </dhcp>
	I0319 20:25:20.039633   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   </ip>
	I0319 20:25:20.039641   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG |   
	I0319 20:25:20.039648   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | </network>
	I0319 20:25:20.039658   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | 
	I0319 20:25:20.044896   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | trying to create private KVM network mk-old-k8s-version-159022 192.168.61.0/24...
	I0319 20:25:20.113584   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | private KVM network mk-old-k8s-version-159022 192.168.61.0/24 created
	I0319 20:25:20.113614   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022 ...
	I0319 20:25:20.113628   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.113550   56546 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:20.113655   55982 main.go:141] libmachine: (old-k8s-version-159022) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 20:25:20.113670   55982 main.go:141] libmachine: (old-k8s-version-159022) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 20:25:20.346103   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.345974   56546 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa...
	I0319 20:25:20.422890   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.422761   56546 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/old-k8s-version-159022.rawdisk...
	I0319 20:25:20.422926   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Writing magic tar header
	I0319 20:25:20.422944   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Writing SSH key tar header
	I0319 20:25:20.423104   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:20.422988   56546 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022 ...
	I0319 20:25:20.423169   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022 (perms=drwx------)
	I0319 20:25:20.423184   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022
	I0319 20:25:20.423201   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 20:25:20.423217   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 20:25:20.423230   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 20:25:20.423236   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 20:25:20.423247   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 20:25:20.423260   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:25:20.423274   55982 main.go:141] libmachine: (old-k8s-version-159022) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 20:25:20.423291   55982 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:25:20.423307   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 20:25:20.423316   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 20:25:20.423322   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home/jenkins
	I0319 20:25:20.423333   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Checking permissions on dir: /home
	I0319 20:25:20.423345   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Skipping /home - not owner
	I0319 20:25:20.424469   55982 main.go:141] libmachine: (old-k8s-version-159022) define libvirt domain using xml: 
	I0319 20:25:20.424484   55982 main.go:141] libmachine: (old-k8s-version-159022) <domain type='kvm'>
	I0319 20:25:20.424492   55982 main.go:141] libmachine: (old-k8s-version-159022)   <name>old-k8s-version-159022</name>
	I0319 20:25:20.424500   55982 main.go:141] libmachine: (old-k8s-version-159022)   <memory unit='MiB'>2200</memory>
	I0319 20:25:20.424505   55982 main.go:141] libmachine: (old-k8s-version-159022)   <vcpu>2</vcpu>
	I0319 20:25:20.424510   55982 main.go:141] libmachine: (old-k8s-version-159022)   <features>
	I0319 20:25:20.424517   55982 main.go:141] libmachine: (old-k8s-version-159022)     <acpi/>
	I0319 20:25:20.424536   55982 main.go:141] libmachine: (old-k8s-version-159022)     <apic/>
	I0319 20:25:20.424549   55982 main.go:141] libmachine: (old-k8s-version-159022)     <pae/>
	I0319 20:25:20.424569   55982 main.go:141] libmachine: (old-k8s-version-159022)     
	I0319 20:25:20.424581   55982 main.go:141] libmachine: (old-k8s-version-159022)   </features>
	I0319 20:25:20.424596   55982 main.go:141] libmachine: (old-k8s-version-159022)   <cpu mode='host-passthrough'>
	I0319 20:25:20.424607   55982 main.go:141] libmachine: (old-k8s-version-159022)   
	I0319 20:25:20.424614   55982 main.go:141] libmachine: (old-k8s-version-159022)   </cpu>
	I0319 20:25:20.424625   55982 main.go:141] libmachine: (old-k8s-version-159022)   <os>
	I0319 20:25:20.424635   55982 main.go:141] libmachine: (old-k8s-version-159022)     <type>hvm</type>
	I0319 20:25:20.424661   55982 main.go:141] libmachine: (old-k8s-version-159022)     <boot dev='cdrom'/>
	I0319 20:25:20.424684   55982 main.go:141] libmachine: (old-k8s-version-159022)     <boot dev='hd'/>
	I0319 20:25:20.424706   55982 main.go:141] libmachine: (old-k8s-version-159022)     <bootmenu enable='no'/>
	I0319 20:25:20.424721   55982 main.go:141] libmachine: (old-k8s-version-159022)   </os>
	I0319 20:25:20.424734   55982 main.go:141] libmachine: (old-k8s-version-159022)   <devices>
	I0319 20:25:20.424742   55982 main.go:141] libmachine: (old-k8s-version-159022)     <disk type='file' device='cdrom'>
	I0319 20:25:20.424759   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/boot2docker.iso'/>
	I0319 20:25:20.424778   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target dev='hdc' bus='scsi'/>
	I0319 20:25:20.424788   55982 main.go:141] libmachine: (old-k8s-version-159022)       <readonly/>
	I0319 20:25:20.424797   55982 main.go:141] libmachine: (old-k8s-version-159022)     </disk>
	I0319 20:25:20.424807   55982 main.go:141] libmachine: (old-k8s-version-159022)     <disk type='file' device='disk'>
	I0319 20:25:20.424824   55982 main.go:141] libmachine: (old-k8s-version-159022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 20:25:20.424842   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/old-k8s-version-159022.rawdisk'/>
	I0319 20:25:20.424853   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target dev='hda' bus='virtio'/>
	I0319 20:25:20.424869   55982 main.go:141] libmachine: (old-k8s-version-159022)     </disk>
	I0319 20:25:20.424881   55982 main.go:141] libmachine: (old-k8s-version-159022)     <interface type='network'>
	I0319 20:25:20.424892   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source network='mk-old-k8s-version-159022'/>
	I0319 20:25:20.424907   55982 main.go:141] libmachine: (old-k8s-version-159022)       <model type='virtio'/>
	I0319 20:25:20.424919   55982 main.go:141] libmachine: (old-k8s-version-159022)     </interface>
	I0319 20:25:20.424926   55982 main.go:141] libmachine: (old-k8s-version-159022)     <interface type='network'>
	I0319 20:25:20.424947   55982 main.go:141] libmachine: (old-k8s-version-159022)       <source network='default'/>
	I0319 20:25:20.424967   55982 main.go:141] libmachine: (old-k8s-version-159022)       <model type='virtio'/>
	I0319 20:25:20.424977   55982 main.go:141] libmachine: (old-k8s-version-159022)     </interface>
	I0319 20:25:20.424984   55982 main.go:141] libmachine: (old-k8s-version-159022)     <serial type='pty'>
	I0319 20:25:20.425000   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target port='0'/>
	I0319 20:25:20.425011   55982 main.go:141] libmachine: (old-k8s-version-159022)     </serial>
	I0319 20:25:20.425021   55982 main.go:141] libmachine: (old-k8s-version-159022)     <console type='pty'>
	I0319 20:25:20.425033   55982 main.go:141] libmachine: (old-k8s-version-159022)       <target type='serial' port='0'/>
	I0319 20:25:20.425044   55982 main.go:141] libmachine: (old-k8s-version-159022)     </console>
	I0319 20:25:20.425054   55982 main.go:141] libmachine: (old-k8s-version-159022)     <rng model='virtio'>
	I0319 20:25:20.425074   55982 main.go:141] libmachine: (old-k8s-version-159022)       <backend model='random'>/dev/random</backend>
	I0319 20:25:20.425088   55982 main.go:141] libmachine: (old-k8s-version-159022)     </rng>
	I0319 20:25:20.425096   55982 main.go:141] libmachine: (old-k8s-version-159022)     
	I0319 20:25:20.425114   55982 main.go:141] libmachine: (old-k8s-version-159022)     
	I0319 20:25:20.425127   55982 main.go:141] libmachine: (old-k8s-version-159022)   </devices>
	I0319 20:25:20.425137   55982 main.go:141] libmachine: (old-k8s-version-159022) </domain>
	I0319 20:25:20.425146   55982 main.go:141] libmachine: (old-k8s-version-159022) 
	I0319 20:25:20.429121   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:50:b9:64 in network default
	I0319 20:25:20.429776   55982 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:25:20.429803   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:20.430475   55982 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:25:20.430840   55982 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:25:20.431352   55982 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:25:20.432067   55982 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:25:21.692285   55982 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:25:21.693038   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:21.693483   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:21.693531   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:21.693478   56546 retry.go:31] will retry after 300.974755ms: waiting for machine to come up
	I0319 20:25:21.996216   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:21.996815   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:21.996846   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:21.996757   56546 retry.go:31] will retry after 378.350693ms: waiting for machine to come up
	I0319 20:25:22.377082   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:22.378062   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:22.378093   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:22.378011   56546 retry.go:31] will retry after 337.090678ms: waiting for machine to come up
	I0319 20:25:22.716618   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:22.717084   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:22.717108   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:22.717034   56546 retry.go:31] will retry after 448.487874ms: waiting for machine to come up
	I0319 20:25:23.167271   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:23.167958   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:23.167989   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:23.167909   56546 retry.go:31] will retry after 738.736662ms: waiting for machine to come up
	I0319 20:25:23.907682   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:23.908392   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:23.908420   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:23.908359   56546 retry.go:31] will retry after 823.841957ms: waiting for machine to come up
	I0319 20:25:24.734060   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:24.734560   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:24.734588   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:24.734505   56546 retry.go:31] will retry after 1.015139108s: waiting for machine to come up
	I0319 20:25:25.751162   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:25.751729   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:25.751760   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:25.751675   56546 retry.go:31] will retry after 901.716648ms: waiting for machine to come up
	I0319 20:25:26.654593   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:26.655089   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:26.655166   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:26.655062   56546 retry.go:31] will retry after 1.819645561s: waiting for machine to come up
	I0319 20:25:28.475818   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:28.476348   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:28.476378   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:28.476308   56546 retry.go:31] will retry after 1.820594289s: waiting for machine to come up
	I0319 20:25:30.298258   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:30.298796   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:30.298831   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:30.298734   56546 retry.go:31] will retry after 2.616805696s: waiting for machine to come up
	I0319 20:25:32.918305   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:32.918846   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:32.918877   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:32.918787   56546 retry.go:31] will retry after 3.429736925s: waiting for machine to come up
	I0319 20:25:36.350814   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:36.351305   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:36.351325   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:36.351269   56546 retry.go:31] will retry after 4.231400763s: waiting for machine to come up
	I0319 20:25:40.584325   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:40.584739   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:25:40.584767   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:25:40.584692   56546 retry.go:31] will retry after 5.452618525s: waiting for machine to come up
	I0319 20:25:46.038847   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.039345   55982 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:25:46.039379   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.039389   55982 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:25:46.039719   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022
	I0319 20:25:46.112140   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:25:46.112168   55982 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:25:46.112182   55982 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:25:46.114965   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.115446   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.115479   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.115614   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:25:46.115639   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:25:46.115668   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:25:46.115689   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:25:46.115704   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:25:46.249138   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:25:46.249424   55982 main.go:141] libmachine: (old-k8s-version-159022) KVM machine creation complete!
	I0319 20:25:46.249807   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:25:46.250468   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:46.250692   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:46.250880   55982 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:25:46.250898   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:25:46.252136   55982 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:25:46.252148   55982 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:25:46.252153   55982 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:25:46.252159   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.254582   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.255005   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.255035   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.255178   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.255401   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.255543   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.255708   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.255914   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.256152   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.256167   55982 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:25:46.372010   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:25:46.372034   55982 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:25:46.372044   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.374984   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.375360   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.375389   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.375524   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.375718   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.375883   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.376015   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.376152   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.376351   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.376367   55982 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:25:46.494038   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:25:46.494105   55982 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:25:46.494118   55982 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:25:46.494132   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:46.494390   55982 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:25:46.494419   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:46.494663   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.497351   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.497668   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.497697   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.497804   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.497978   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.498135   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.498270   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.498444   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.498603   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.498615   55982 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:25:46.628943   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:25:46.628978   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.631797   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.632186   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.632209   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.632454   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.632664   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.632822   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.633003   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.633200   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:46.633371   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:46.633387   55982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:25:46.761522   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:25:46.761556   55982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:25:46.761592   55982 buildroot.go:174] setting up certificates
	I0319 20:25:46.761606   55982 provision.go:84] configureAuth start
	I0319 20:25:46.761617   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:25:46.761884   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:46.764704   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.765058   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.765089   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.765274   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.767545   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.767887   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.767923   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.768048   55982 provision.go:143] copyHostCerts
	I0319 20:25:46.768113   55982 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:25:46.768127   55982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:25:46.768208   55982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:25:46.768347   55982 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:25:46.768364   55982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:25:46.768397   55982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:25:46.768513   55982 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:25:46.768524   55982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:25:46.768553   55982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:25:46.768642   55982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:25:46.906018   55982 provision.go:177] copyRemoteCerts
	I0319 20:25:46.906097   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:25:46.906126   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:46.908825   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.909301   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:46.909328   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:46.909399   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:46.909611   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:46.909795   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:46.909933   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.001445   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:25:47.032497   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:25:47.062999   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:25:47.089974   55982 provision.go:87] duration metric: took 328.354781ms to configureAuth
	I0319 20:25:47.090004   55982 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:25:47.090215   55982 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:25:47.090290   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.092919   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.093301   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.093329   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.093482   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.093671   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.093834   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.094009   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.094188   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:47.094361   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:47.094389   55982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:25:47.389752   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:25:47.389778   55982 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:25:47.389787   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetURL
	I0319 20:25:47.391023   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using libvirt version 6000000
	I0319 20:25:47.393502   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.393873   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.393918   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.394056   55982 main.go:141] libmachine: Docker is up and running!
	I0319 20:25:47.394073   55982 main.go:141] libmachine: Reticulating splines...
	I0319 20:25:47.394081   55982 client.go:171] duration metric: took 27.359310019s to LocalClient.Create
	I0319 20:25:47.394110   55982 start.go:167] duration metric: took 27.359377921s to libmachine.API.Create "old-k8s-version-159022"
	I0319 20:25:47.394120   55982 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:25:47.394131   55982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:25:47.394146   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.394369   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:25:47.394393   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.396807   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.397109   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.397143   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.397295   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.397448   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.397599   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.397785   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.490331   55982 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:25:47.495270   55982 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:25:47.495296   55982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:25:47.495364   55982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:25:47.495431   55982 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:25:47.495517   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:25:47.508162   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:25:47.536042   55982 start.go:296] duration metric: took 141.908819ms for postStartSetup
	I0319 20:25:47.536094   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:25:47.536673   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:47.539443   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.539769   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.539793   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.540012   55982 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:25:47.540247   55982 start.go:128] duration metric: took 27.526518248s to createHost
	I0319 20:25:47.540294   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.542495   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.542838   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.542860   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.543017   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.543167   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.543310   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.543428   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.543574   55982 main.go:141] libmachine: Using SSH client type: native
	I0319 20:25:47.543732   55982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:25:47.543745   55982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:25:47.661671   55982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710879947.645733085
	
	I0319 20:25:47.661695   55982 fix.go:216] guest clock: 1710879947.645733085
	I0319 20:25:47.661702   55982 fix.go:229] Guest: 2024-03-19 20:25:47.645733085 +0000 UTC Remote: 2024-03-19 20:25:47.540279791 +0000 UTC m=+61.265095128 (delta=105.453294ms)
	I0319 20:25:47.661719   55982 fix.go:200] guest clock delta is within tolerance: 105.453294ms
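The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and accept the start because the ~105ms delta is below the allowed drift. Below is a minimal, self-contained Go sketch of that compare-against-tolerance pattern; the 2s tolerance, function name, and the simulated guest reading are illustrative assumptions for the sketch, not minikube's actual fix.go implementation.

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock difference and
// whether it falls under the allowed drift.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(105 * time.Millisecond)                // stand-in for the value read from the guest
	delta, ok := withinTolerance(guest, host, 2*time.Second) // 2s tolerance is an assumption for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}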
	I0319 20:25:47.661723   55982 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 27.648182773s
	I0319 20:25:47.661747   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.662043   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:47.665318   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.665745   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.665772   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.665924   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.666372   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.666626   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:25:47.666722   55982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:25:47.666765   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.666857   55982 ssh_runner.go:195] Run: cat /version.json
	I0319 20:25:47.666887   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:25:47.669670   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.669921   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.670174   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.670205   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.670519   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.670533   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:47.670614   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:47.670739   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.670765   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:25:47.670897   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.670944   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:25:47.671104   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.671131   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:25:47.671276   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:25:47.762533   55982 ssh_runner.go:195] Run: systemctl --version
	I0319 20:25:47.788473   55982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:25:47.954051   55982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:25:47.962242   55982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:25:47.962319   55982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:25:47.982103   55982 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:25:47.982145   55982 start.go:494] detecting cgroup driver to use...
	I0319 20:25:47.982217   55982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:25:48.003993   55982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:25:48.021105   55982 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:25:48.021166   55982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:25:48.040411   55982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:25:48.058121   55982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:25:48.212569   55982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:25:48.390385   55982 docker.go:233] disabling docker service ...
	I0319 20:25:48.390453   55982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:25:48.408015   55982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:25:48.424140   55982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:25:48.566359   55982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:25:48.725824   55982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:25:48.754844   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:25:48.782110   55982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:25:48.782179   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.795927   55982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:25:48.795985   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.807923   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.819776   55982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:25:48.831705   55982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:25:48.845768   55982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:25:48.862637   55982 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:25:48.862690   55982 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:25:48.880204   55982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
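The three commands above follow a check-then-fallback sequence: probe the bridge netfilter sysctl, load br_netfilter when the sysctl is missing (as it was here), then enable IPv4 forwarding. A minimal Go sketch of that sequence is below; it must run as root, the command names mirror the log, and the helper function is illustrative rather than minikube's own code.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the sysctl and falls back to loading the
// br_netfilter module when the kernel does not expose it yet.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	fmt.Println("sysctl not available, loading br_netfilter")
	return exec.Command("modprobe", "br_netfilter").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("br_netfilter setup failed:", err)
		return
	}
	// Mirror the `echo 1 > /proc/sys/net/ipv4/ip_forward` step from the log.
	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}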
	I0319 20:25:48.894624   55982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:49.054962   55982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:25:49.214429   55982 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:25:49.214492   55982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:25:49.219878   55982 start.go:562] Will wait 60s for crictl version
	I0319 20:25:49.219932   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:49.224417   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:25:49.275717   55982 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:25:49.275794   55982 ssh_runner.go:195] Run: crio --version
	I0319 20:25:49.313888   55982 ssh_runner.go:195] Run: crio --version
	I0319 20:25:49.358265   55982 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:25:49.359618   55982 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:25:49.362790   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:49.363262   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:25:36 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:25:49.363293   55982 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:25:49.363591   55982 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:25:49.369067   55982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:25:49.385009   55982 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:25:49.385178   55982 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:25:49.385239   55982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:25:49.435514   55982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:25:49.435615   55982 ssh_runner.go:195] Run: which lz4
	I0319 20:25:49.441041   55982 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:25:49.446341   55982 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:25:49.446371   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:25:51.708334   55982 crio.go:462] duration metric: took 2.267338917s to copy over tarball
	I0319 20:25:51.708422   55982 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:25:54.900870   55982 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.192420858s)
	I0319 20:25:54.900899   55982 crio.go:469] duration metric: took 3.192525975s to extract the tarball
	I0319 20:25:54.900908   55982 ssh_runner.go:146] rm: /preloaded.tar.lz4
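The preload handling above is a check-then-transfer pattern: stat /preloaded.tar.lz4 on the guest, copy the cached tarball over only because that stat failed, extract it, then remove it. Below is a minimal local-filesystem analogue of the "copy only if missing" part in Go; the paths are placeholders and minikube actually performs these steps over SSH.

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst does not exist yet, mirroring
// the "stat first, transfer only if missing" behaviour in the log.
func ensureFile(dst, src string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	return out.Sync()
}

func main() {
	// Placeholder paths for the sketch only.
	if err := ensureFile("/tmp/preloaded.tar.lz4", "preloaded-images.tar.lz4"); err != nil {
		fmt.Println("copy failed:", err)
	}
}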
	I0319 20:25:54.945654   55982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:25:54.997150   55982 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:25:54.997184   55982 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:25:54.997296   55982 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:54.997579   55982 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:25:54.997591   55982 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:54.997620   55982 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:54.997723   55982 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:54.997743   55982 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:54.997837   55982 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:25:54.997728   55982 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:54.998942   55982 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:54.999189   55982 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:54.999209   55982 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:54.999212   55982 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:54.999217   55982 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:25:54.999298   55982 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:54.999308   55982 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:54.999788   55982 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.154901   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:55.178995   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.194425   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:55.207142   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:25:55.213949   55982 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:25:55.213982   55982 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:55.214022   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.215693   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:55.249431   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:55.295990   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:55.300329   55982 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:25:55.300435   55982 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.300501   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.300329   55982 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:25:55.300546   55982 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:55.300605   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.346880   55982 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:25:55.346948   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:25:55.346976   55982 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:25:55.347016   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.369140   55982 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:25:55.369184   55982 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:55.369244   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.406307   55982 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:25:55.406354   55982 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:55.406381   55982 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:25:55.406406   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:25:55.406422   55982 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:55.406467   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.406483   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:25:55.406411   55982 ssh_runner.go:195] Run: which crictl
	I0319 20:25:55.437222   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:25:55.437309   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:25:55.437341   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:25:55.517473   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:25:55.517517   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:25:55.517577   55982 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:25:55.517723   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:25:55.553853   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:25:55.561362   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:25:55.598715   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:25:55.598735   55982 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:25:55.931464   55982 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:25:56.081504   55982 cache_images.go:92] duration metric: took 1.084303153s to LoadCachedImages
	W0319 20:25:56.081622   55982 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0319 20:25:56.081643   55982 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:25:56.081776   55982 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:25:56.081863   55982 ssh_runner.go:195] Run: crio config
	I0319 20:25:56.139415   55982 cni.go:84] Creating CNI manager for ""
	I0319 20:25:56.139438   55982 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:25:56.139450   55982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:25:56.139467   55982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:25:56.139666   55982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:25:56.139756   55982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:25:56.151535   55982 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:25:56.151615   55982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:25:56.162688   55982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:25:56.183525   55982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:25:56.202662   55982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0319 20:25:56.222091   55982 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:25:56.226617   55982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:25:56.240650   55982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:25:56.387390   55982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:25:56.579447   55982 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:25:56.579475   55982 certs.go:194] generating shared ca certs ...
	I0319 20:25:56.579495   55982 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.579653   55982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:25:56.579726   55982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:25:56.579755   55982 certs.go:256] generating profile certs ...
	I0319 20:25:56.579866   55982 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:25:56.579886   55982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt with IP's: []
	I0319 20:25:56.671840   55982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt ...
	I0319 20:25:56.671869   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: {Name:mk61c72fce9c679651e7a9e1decdd5e5de4586de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.672041   55982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key ...
	I0319 20:25:56.672059   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key: {Name:mk4f4dae15ac74583de7977e4327a5ef8cb539c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.672176   55982 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:25:56.672196   55982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.28]
	I0319 20:25:56.842042   55982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4 ...
	I0319 20:25:56.842071   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4: {Name:mkca57b1829959d481c3f86e2a09caa6cc12fe28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.842247   55982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4 ...
	I0319 20:25:56.842263   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4: {Name:mk2b34b2cf48017b7663a3ecf0d75ff3f102a0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:56.842355   55982 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt.d78c40b4 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt
	I0319 20:25:56.842468   55982 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4 -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key
	I0319 20:25:56.842550   55982 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:25:56.842576   55982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt with IP's: []
	I0319 20:25:57.176440   55982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt ...
	I0319 20:25:57.176468   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt: {Name:mk6cfea44b706848d0ef5c66bf794a61fff6c263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:57.176654   55982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key ...
	I0319 20:25:57.176673   55982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key: {Name:mkac062ea72fce040eaa4b5cd499c8a1bc2a3b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:25:57.176863   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:25:57.176914   55982 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:25:57.176924   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:25:57.176943   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:25:57.176967   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:25:57.176990   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:25:57.177026   55982 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:25:57.177598   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:25:57.214307   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:25:57.242380   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:25:57.270810   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:25:57.298840   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:25:57.327424   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:25:57.354944   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:25:57.383921   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:25:57.415824   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:25:57.446943   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:25:57.498524   55982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:25:57.524299   55982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:25:57.548071   55982 ssh_runner.go:195] Run: openssl version
	I0319 20:25:57.555174   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:25:57.568658   55982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:25:57.573925   55982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:25:57.573975   55982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:25:57.580585   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:25:57.593587   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:25:57.606796   55982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:25:57.612138   55982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:25:57.612195   55982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:25:57.619084   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:25:57.632991   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:25:57.645667   55982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:57.650822   55982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:57.650871   55982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:25:57.657697   55982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
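The openssl/ln steps above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (for example 51391683.0 and b5213941.0), which is the layout OpenSSL uses to look up trusted certificates. A minimal Go sketch of that hash-and-symlink pattern follows; the paths are illustrative and the real commands in the log run remotely with sudo, so this is not minikube's own certs.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert symlinks certPath into certsDir under "<subject-hash>.0",
// the same layout the `openssl x509 -hash` / `ln -fs` commands produce.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}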
	I0319 20:25:57.671652   55982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:25:57.678066   55982 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 20:25:57.678131   55982 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:25:57.678227   55982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:25:57.678281   55982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:25:57.719280   55982 cri.go:89] found id: ""
	I0319 20:25:57.719367   55982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 20:25:57.731073   55982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:25:57.742226   55982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:25:57.755538   55982 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:25:57.755557   55982 kubeadm.go:156] found existing configuration files:
	
	I0319 20:25:57.755603   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:25:57.767830   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:25:57.767899   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:25:57.778642   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:25:57.790544   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:25:57.790619   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:25:57.802730   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:25:57.813705   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:25:57.813756   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:25:57.825964   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:25:57.837099   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:25:57.837150   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:25:57.848351   55982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:25:57.987481   55982 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:25:57.987556   55982 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:25:58.185352   55982 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:25:58.185497   55982 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:25:58.185618   55982 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:25:58.461759   55982 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:25:58.464609   55982 out.go:204]   - Generating certificates and keys ...
	I0319 20:25:58.464729   55982 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:25:58.464818   55982 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:25:58.819523   55982 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 20:25:59.010342   55982 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 20:25:59.289340   55982 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 20:25:59.481304   55982 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 20:25:59.772779   55982 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 20:25:59.772969   55982 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	I0319 20:26:00.302141   55982 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 20:26:00.302338   55982 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	I0319 20:26:00.676075   55982 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 20:26:00.924844   55982 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 20:26:01.100878   55982 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 20:26:01.100971   55982 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:26:01.215620   55982 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:26:01.525378   55982 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:26:01.790567   55982 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:26:02.183004   55982 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:26:02.199227   55982 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:26:02.200349   55982 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:26:02.200402   55982 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:26:02.346678   55982 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:26:02.348574   55982 out.go:204]   - Booting up control plane ...
	I0319 20:26:02.348716   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:26:02.356184   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:26:02.357288   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:26:02.358196   55982 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:26:02.362557   55982 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:26:42.361450   55982 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:26:42.361557   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:26:42.361770   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:26:47.362533   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:26:47.362763   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:26:57.363368   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:26:57.363647   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:27:17.364761   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:27:17.365042   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:27:57.365154   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:27:57.365686   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:27:57.365702   55982 kubeadm.go:309] 
	I0319 20:27:57.365795   55982 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:27:57.365886   55982 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:27:57.365897   55982 kubeadm.go:309] 
	I0319 20:27:57.365971   55982 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:27:57.366073   55982 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:27:57.366246   55982 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:27:57.366261   55982 kubeadm.go:309] 
	I0319 20:27:57.366399   55982 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:27:57.366459   55982 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:27:57.366503   55982 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:27:57.366511   55982 kubeadm.go:309] 
	I0319 20:27:57.366664   55982 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:27:57.366775   55982 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:27:57.366783   55982 kubeadm.go:309] 
	I0319 20:27:57.366922   55982 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:27:57.367042   55982 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:27:57.367191   55982 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:27:57.367362   55982 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:27:57.367403   55982 kubeadm.go:309] 
	I0319 20:27:57.367633   55982 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:27:57.367824   55982 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:27:57.368029   55982 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:27:57.368202   55982 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-159022] and IPs [192.168.61.28 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0319 20:27:57.368250   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:28:00.047920   55982 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.67964074s)
	I0319 20:28:00.048003   55982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:28:00.064905   55982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:28:00.077445   55982 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:28:00.077469   55982 kubeadm.go:156] found existing configuration files:
	
	I0319 20:28:00.077520   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:28:00.089987   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:28:00.090035   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:28:00.101687   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:28:00.111634   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:28:00.111678   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:28:00.123195   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:28:00.134315   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:28:00.134375   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:28:00.145918   55982 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:28:00.156950   55982 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:28:00.157012   55982 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:28:00.167775   55982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:28:00.411165   55982 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:29:56.819531   55982 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:29:56.819659   55982 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:29:56.821663   55982 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:29:56.821758   55982 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:29:56.821886   55982 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:29:56.822011   55982 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:29:56.822097   55982 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:29:56.822150   55982 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:29:56.824019   55982 out.go:204]   - Generating certificates and keys ...
	I0319 20:29:56.824086   55982 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:29:56.824139   55982 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:29:56.824217   55982 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:29:56.824322   55982 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:29:56.824432   55982 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:29:56.824525   55982 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:29:56.824622   55982 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:29:56.824697   55982 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:29:56.824801   55982 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:29:56.824933   55982 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:29:56.824992   55982 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:29:56.825047   55982 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:29:56.825094   55982 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:29:56.825139   55982 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:29:56.825207   55982 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:29:56.825259   55982 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:29:56.825368   55982 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:29:56.825465   55982 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:29:56.825517   55982 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:29:56.825616   55982 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:29:56.827311   55982 out.go:204]   - Booting up control plane ...
	I0319 20:29:56.827412   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:29:56.827498   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:29:56.827584   55982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:29:56.827666   55982 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:29:56.827869   55982 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:29:56.827937   55982 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:29:56.828020   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:29:56.828203   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:29:56.828321   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:29:56.828530   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:29:56.828612   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:29:56.828876   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:29:56.828952   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:29:56.829121   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:29:56.829221   55982 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:29:56.829401   55982 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:29:56.829410   55982 kubeadm.go:309] 
	I0319 20:29:56.829463   55982 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:29:56.829519   55982 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:29:56.829530   55982 kubeadm.go:309] 
	I0319 20:29:56.829582   55982 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:29:56.829633   55982 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:29:56.829786   55982 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:29:56.829797   55982 kubeadm.go:309] 
	I0319 20:29:56.829947   55982 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:29:56.830007   55982 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:29:56.830054   55982 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:29:56.830064   55982 kubeadm.go:309] 
	I0319 20:29:56.830224   55982 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:29:56.830362   55982 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:29:56.830372   55982 kubeadm.go:309] 
	I0319 20:29:56.830458   55982 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:29:56.830558   55982 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:29:56.830652   55982 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:29:56.830715   55982 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:29:56.830741   55982 kubeadm.go:309] 
	I0319 20:29:56.830777   55982 kubeadm.go:393] duration metric: took 3m59.15265081s to StartCluster
	I0319 20:29:56.830816   55982 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:29:56.830860   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:29:56.885490   55982 cri.go:89] found id: ""
	I0319 20:29:56.885521   55982 logs.go:276] 0 containers: []
	W0319 20:29:56.885533   55982 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:29:56.885540   55982 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:29:56.885599   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:29:56.923850   55982 cri.go:89] found id: ""
	I0319 20:29:56.923879   55982 logs.go:276] 0 containers: []
	W0319 20:29:56.923889   55982 logs.go:278] No container was found matching "etcd"
	I0319 20:29:56.923896   55982 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:29:56.923954   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:29:56.970642   55982 cri.go:89] found id: ""
	I0319 20:29:56.970669   55982 logs.go:276] 0 containers: []
	W0319 20:29:56.970679   55982 logs.go:278] No container was found matching "coredns"
	I0319 20:29:56.970687   55982 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:29:56.970748   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:29:57.009555   55982 cri.go:89] found id: ""
	I0319 20:29:57.009587   55982 logs.go:276] 0 containers: []
	W0319 20:29:57.009597   55982 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:29:57.009605   55982 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:29:57.009666   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:29:57.046461   55982 cri.go:89] found id: ""
	I0319 20:29:57.046493   55982 logs.go:276] 0 containers: []
	W0319 20:29:57.046504   55982 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:29:57.046511   55982 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:29:57.046579   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:29:57.084091   55982 cri.go:89] found id: ""
	I0319 20:29:57.084118   55982 logs.go:276] 0 containers: []
	W0319 20:29:57.084129   55982 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:29:57.084136   55982 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:29:57.084195   55982 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:29:57.122334   55982 cri.go:89] found id: ""
	I0319 20:29:57.122365   55982 logs.go:276] 0 containers: []
	W0319 20:29:57.122377   55982 logs.go:278] No container was found matching "kindnet"
	I0319 20:29:57.122387   55982 logs.go:123] Gathering logs for container status ...
	I0319 20:29:57.122402   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:29:57.165894   55982 logs.go:123] Gathering logs for kubelet ...
	I0319 20:29:57.165929   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:29:57.231122   55982 logs.go:123] Gathering logs for dmesg ...
	I0319 20:29:57.231157   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:29:57.247995   55982 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:29:57.248037   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:29:57.386567   55982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:29:57.386598   55982 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:29:57.386614   55982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0319 20:29:57.489187   55982 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:29:57.489237   55982 out.go:239] * 
	* 
	W0319 20:29:57.489297   55982 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:29:57.489356   55982 out.go:239] * 
	* 
	W0319 20:29:57.490139   55982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:29:57.493459   55982 out.go:177] 
	W0319 20:29:57.494907   55982 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:29:57.494960   55982 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:29:57.494987   55982 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:29:57.496766   55982 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-159022 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 6 (256.332072ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:29:57.801509   58734 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159022" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (311.55s)
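Note: the FirstStart failure above exits with K8S_KUBELET_NOT_RUNNING while kubeadm init (v1.20.0) waits on the kubelet health endpoint at 127.0.0.1:10248. The log's own advice is to inspect the kubelet and the control-plane containers, and the final suggestion is to retry with the systemd cgroup driver. A minimal triage sketch using only the commands already quoted above (profile name old-k8s-version-159022 and start arguments are taken from this run; adjust as needed):

	# inside the node (e.g. via: out/minikube-linux-amd64 ssh -p old-k8s-version-159022)
	systemctl status kubelet       # is the kubelet service active?
	journalctl -xeu kubelet        # why did it fail to come up?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# from the host, retry the start with the cgroup-driver override the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-159022 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd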

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-414130 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-414130 --alsologtostderr -v=3: exit status 82 (2m0.532356768s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-414130"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:27:45.264851   57846 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:27:45.265001   57846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:27:45.265012   57846 out.go:304] Setting ErrFile to fd 2...
	I0319 20:27:45.265018   57846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:27:45.265226   57846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:27:45.265495   57846 out.go:298] Setting JSON to false
	I0319 20:27:45.265585   57846 mustload.go:65] Loading cluster: no-preload-414130
	I0319 20:27:45.265945   57846 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:27:45.266020   57846 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:27:45.266211   57846 mustload.go:65] Loading cluster: no-preload-414130
	I0319 20:27:45.266336   57846 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:27:45.266389   57846 stop.go:39] StopHost: no-preload-414130
	I0319 20:27:45.266909   57846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:27:45.266980   57846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:27:45.282850   57846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0319 20:27:45.283362   57846 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:27:45.284003   57846 main.go:141] libmachine: Using API Version  1
	I0319 20:27:45.284025   57846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:27:45.284482   57846 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:27:45.286995   57846 out.go:177] * Stopping node "no-preload-414130"  ...
	I0319 20:27:45.288926   57846 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 20:27:45.288960   57846 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:27:45.289172   57846 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 20:27:45.289194   57846 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:27:45.292334   57846 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:27:45.292810   57846 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:26:04 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:27:45.292833   57846 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:27:45.293031   57846 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:27:45.293200   57846 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:27:45.293395   57846 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:27:45.293599   57846 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:27:45.415831   57846 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 20:27:45.473280   57846 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 20:27:45.540438   57846 main.go:141] libmachine: Stopping "no-preload-414130"...
	I0319 20:27:45.540462   57846 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:27:45.542246   57846 main.go:141] libmachine: (no-preload-414130) Calling .Stop
	I0319 20:27:45.545896   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 0/120
	I0319 20:27:46.547461   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 1/120
	I0319 20:27:47.548837   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 2/120
	I0319 20:27:48.550755   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 3/120
	I0319 20:27:49.552210   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 4/120
	I0319 20:27:50.553589   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 5/120
	I0319 20:27:51.555271   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 6/120
	I0319 20:27:52.557394   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 7/120
	I0319 20:27:53.559098   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 8/120
	I0319 20:27:54.560346   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 9/120
	I0319 20:27:55.561713   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 10/120
	I0319 20:27:56.563303   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 11/120
	I0319 20:27:57.564864   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 12/120
	I0319 20:27:58.566301   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 13/120
	I0319 20:27:59.567522   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 14/120
	I0319 20:28:00.569714   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 15/120
	I0319 20:28:01.571078   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 16/120
	I0319 20:28:02.572381   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 17/120
	I0319 20:28:03.573874   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 18/120
	I0319 20:28:04.575168   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 19/120
	I0319 20:28:05.576668   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 20/120
	I0319 20:28:06.577963   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 21/120
	I0319 20:28:07.579179   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 22/120
	I0319 20:28:08.580549   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 23/120
	I0319 20:28:09.582712   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 24/120
	I0319 20:28:10.584008   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 25/120
	I0319 20:28:11.585686   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 26/120
	I0319 20:28:12.587113   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 27/120
	I0319 20:28:13.588758   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 28/120
	I0319 20:28:14.590627   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 29/120
	I0319 20:28:15.592636   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 30/120
	I0319 20:28:16.594660   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 31/120
	I0319 20:28:17.595954   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 32/120
	I0319 20:28:18.597921   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 33/120
	I0319 20:28:19.599158   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 34/120
	I0319 20:28:20.601269   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 35/120
	I0319 20:28:21.603010   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 36/120
	I0319 20:28:22.604581   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 37/120
	I0319 20:28:23.606738   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 38/120
	I0319 20:28:24.608105   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 39/120
	I0319 20:28:25.609605   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 40/120
	I0319 20:28:26.610986   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 41/120
	I0319 20:28:27.612735   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 42/120
	I0319 20:28:28.614575   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 43/120
	I0319 20:28:29.616170   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 44/120
	I0319 20:28:30.618115   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 45/120
	I0319 20:28:31.619625   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 46/120
	I0319 20:28:32.620943   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 47/120
	I0319 20:28:33.622187   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 48/120
	I0319 20:28:34.623456   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 49/120
	I0319 20:28:35.625715   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 50/120
	I0319 20:28:36.627000   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 51/120
	I0319 20:28:37.628403   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 52/120
	I0319 20:28:38.629822   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 53/120
	I0319 20:28:39.631206   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 54/120
	I0319 20:28:40.633090   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 55/120
	I0319 20:28:41.634255   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 56/120
	I0319 20:28:42.635392   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 57/120
	I0319 20:28:43.636740   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 58/120
	I0319 20:28:44.637909   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 59/120
	I0319 20:28:45.639793   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 60/120
	I0319 20:28:46.641102   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 61/120
	I0319 20:28:47.642512   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 62/120
	I0319 20:28:48.644327   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 63/120
	I0319 20:28:49.645699   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 64/120
	I0319 20:28:50.647024   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 65/120
	I0319 20:28:51.648454   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 66/120
	I0319 20:28:52.650559   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 67/120
	I0319 20:28:53.651972   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 68/120
	I0319 20:28:54.653215   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 69/120
	I0319 20:28:55.655370   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 70/120
	I0319 20:28:56.656687   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 71/120
	I0319 20:28:57.658582   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 72/120
	I0319 20:28:58.659816   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 73/120
	I0319 20:28:59.661886   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 74/120
	I0319 20:29:00.663256   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 75/120
	I0319 20:29:01.664581   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 76/120
	I0319 20:29:02.665751   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 77/120
	I0319 20:29:03.667194   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 78/120
	I0319 20:29:04.668336   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 79/120
	I0319 20:29:05.670301   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 80/120
	I0319 20:29:06.671592   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 81/120
	I0319 20:29:07.672688   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 82/120
	I0319 20:29:08.674637   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 83/120
	I0319 20:29:09.675784   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 84/120
	I0319 20:29:10.677418   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 85/120
	I0319 20:29:11.679103   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 86/120
	I0319 20:29:12.680948   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 87/120
	I0319 20:29:13.682407   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 88/120
	I0319 20:29:14.683770   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 89/120
	I0319 20:29:15.686170   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 90/120
	I0319 20:29:16.687844   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 91/120
	I0319 20:29:17.689141   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 92/120
	I0319 20:29:18.690546   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 93/120
	I0319 20:29:19.691853   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 94/120
	I0319 20:29:20.693336   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 95/120
	I0319 20:29:21.694787   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 96/120
	I0319 20:29:22.696127   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 97/120
	I0319 20:29:23.697519   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 98/120
	I0319 20:29:24.699011   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 99/120
	I0319 20:29:25.701108   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 100/120
	I0319 20:29:26.703115   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 101/120
	I0319 20:29:27.704459   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 102/120
	I0319 20:29:28.706620   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 103/120
	I0319 20:29:29.707839   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 104/120
	I0319 20:29:30.710024   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 105/120
	I0319 20:29:31.711557   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 106/120
	I0319 20:29:32.713138   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 107/120
	I0319 20:29:33.714573   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 108/120
	I0319 20:29:34.716036   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 109/120
	I0319 20:29:35.718224   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 110/120
	I0319 20:29:36.719341   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 111/120
	I0319 20:29:37.720826   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 112/120
	I0319 20:29:38.722271   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 113/120
	I0319 20:29:39.723703   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 114/120
	I0319 20:29:40.725652   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 115/120
	I0319 20:29:41.726966   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 116/120
	I0319 20:29:42.728390   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 117/120
	I0319 20:29:43.729757   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 118/120
	I0319 20:29:44.731348   57846 main.go:141] libmachine: (no-preload-414130) Waiting for machine to stop 119/120
	I0319 20:29:45.732153   57846 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0319 20:29:45.732203   57846 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0319 20:29:45.734496   57846 out.go:177] 
	W0319 20:29:45.736207   57846 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0319 20:29:45.736232   57846 out.go:239] * 
	* 
	W0319 20:29:45.738869   57846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:29:45.740371   57846 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-414130 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130: exit status 3 (18.470414867s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:30:04.212551   58659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host
	E0319 20:30:04.212573   58659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-414130" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.00s)
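Note: the Stop failure above is a GUEST_STOP_TIMEOUT: libmachine polls the KVM domain for 120 attempts ("Waiting for machine to stop 0/120" through "119/120", roughly two minutes) and the guest never leaves the Running state, after which SSH to 192.168.72.29:22 reports no route to host. A possible manual check on the Jenkins host, assuming libvirt's virsh CLI is available there (the domain name no-preload-414130 matches this run's profile):

	virsh list --all                     # is the domain still reported as running?
	virsh domstate no-preload-414130     # confirm the state libmachine keeps seeing
	virsh shutdown no-preload-414130     # ask the guest to power off via ACPI
	virsh destroy no-preload-414130      # last resort: hard power-off
	# then collect the logs the error box asks for:
	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-414130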

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-421660 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-421660 --alsologtostderr -v=3: exit status 82 (2m0.54898127s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-421660"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:28:31.497404   58073 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:28:31.497537   58073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:28:31.497553   58073 out.go:304] Setting ErrFile to fd 2...
	I0319 20:28:31.497559   58073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:28:31.497886   58073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:28:31.498209   58073 out.go:298] Setting JSON to false
	I0319 20:28:31.498311   58073 mustload.go:65] Loading cluster: embed-certs-421660
	I0319 20:28:31.498705   58073 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:28:31.498792   58073 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:28:31.498984   58073 mustload.go:65] Loading cluster: embed-certs-421660
	I0319 20:28:31.499089   58073 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:28:31.499114   58073 stop.go:39] StopHost: embed-certs-421660
	I0319 20:28:31.499490   58073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:28:31.499530   58073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:28:31.514061   58073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0319 20:28:31.514492   58073 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:28:31.515173   58073 main.go:141] libmachine: Using API Version  1
	I0319 20:28:31.515207   58073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:28:31.515511   58073 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:28:31.517836   58073 out.go:177] * Stopping node "embed-certs-421660"  ...
	I0319 20:28:31.519546   58073 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 20:28:31.519572   58073 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:28:31.519825   58073 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 20:28:31.519874   58073 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:28:31.522539   58073 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:28:31.522946   58073 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:26:58 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:28:31.522966   58073 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:28:31.523140   58073 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:28:31.523300   58073 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:28:31.523461   58073 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:28:31.523599   58073 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:28:31.648824   58073 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 20:28:31.717351   58073 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 20:28:31.786336   58073 main.go:141] libmachine: Stopping "embed-certs-421660"...
	I0319 20:28:31.786373   58073 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:28:31.788010   58073 main.go:141] libmachine: (embed-certs-421660) Calling .Stop
	I0319 20:28:31.791502   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 0/120
	I0319 20:28:32.792932   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 1/120
	I0319 20:28:33.794261   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 2/120
	I0319 20:28:34.795599   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 3/120
	I0319 20:28:35.796939   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 4/120
	I0319 20:28:36.798775   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 5/120
	I0319 20:28:37.800136   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 6/120
	I0319 20:28:38.801459   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 7/120
	I0319 20:28:39.802763   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 8/120
	I0319 20:28:40.803987   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 9/120
	I0319 20:28:41.806012   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 10/120
	I0319 20:28:42.807286   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 11/120
	I0319 20:28:43.808474   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 12/120
	I0319 20:28:44.809844   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 13/120
	I0319 20:28:45.811334   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 14/120
	I0319 20:28:46.813623   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 15/120
	I0319 20:28:47.815169   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 16/120
	I0319 20:28:48.817219   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 17/120
	I0319 20:28:49.818931   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 18/120
	I0319 20:28:50.820342   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 19/120
	I0319 20:28:51.822114   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 20/120
	I0319 20:28:52.823322   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 21/120
	I0319 20:28:53.824630   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 22/120
	I0319 20:28:54.825733   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 23/120
	I0319 20:28:55.827006   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 24/120
	I0319 20:28:56.828900   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 25/120
	I0319 20:28:57.830165   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 26/120
	I0319 20:28:58.832227   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 27/120
	I0319 20:28:59.833498   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 28/120
	I0319 20:29:00.834854   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 29/120
	I0319 20:29:01.836829   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 30/120
	I0319 20:29:02.838084   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 31/120
	I0319 20:29:03.839457   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 32/120
	I0319 20:29:04.840710   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 33/120
	I0319 20:29:05.842163   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 34/120
	I0319 20:29:06.844089   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 35/120
	I0319 20:29:07.845341   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 36/120
	I0319 20:29:08.846789   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 37/120
	I0319 20:29:09.848100   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 38/120
	I0319 20:29:10.849533   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 39/120
	I0319 20:29:11.851655   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 40/120
	I0319 20:29:12.853909   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 41/120
	I0319 20:29:13.855153   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 42/120
	I0319 20:29:14.856543   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 43/120
	I0319 20:29:15.859042   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 44/120
	I0319 20:29:16.861066   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 45/120
	I0319 20:29:17.862287   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 46/120
	I0319 20:29:18.863755   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 47/120
	I0319 20:29:19.865169   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 48/120
	I0319 20:29:20.866746   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 49/120
	I0319 20:29:21.868912   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 50/120
	I0319 20:29:22.870854   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 51/120
	I0319 20:29:23.872277   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 52/120
	I0319 20:29:24.873644   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 53/120
	I0319 20:29:25.875166   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 54/120
	I0319 20:29:26.877202   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 55/120
	I0319 20:29:27.878588   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 56/120
	I0319 20:29:28.879893   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 57/120
	I0319 20:29:29.881458   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 58/120
	I0319 20:29:30.882953   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 59/120
	I0319 20:29:31.885166   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 60/120
	I0319 20:29:32.886590   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 61/120
	I0319 20:29:33.887958   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 62/120
	I0319 20:29:34.889476   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 63/120
	I0319 20:29:35.890844   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 64/120
	I0319 20:29:36.892886   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 65/120
	I0319 20:29:37.894254   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 66/120
	I0319 20:29:38.895620   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 67/120
	I0319 20:29:39.896999   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 68/120
	I0319 20:29:40.898367   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 69/120
	I0319 20:29:41.900578   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 70/120
	I0319 20:29:42.901952   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 71/120
	I0319 20:29:43.903149   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 72/120
	I0319 20:29:44.904235   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 73/120
	I0319 20:29:45.905905   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 74/120
	I0319 20:29:46.907768   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 75/120
	I0319 20:29:47.909162   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 76/120
	I0319 20:29:48.910507   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 77/120
	I0319 20:29:49.912611   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 78/120
	I0319 20:29:50.914926   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 79/120
	I0319 20:29:51.917024   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 80/120
	I0319 20:29:52.918855   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 81/120
	I0319 20:29:53.920356   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 82/120
	I0319 20:29:54.921741   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 83/120
	I0319 20:29:55.923139   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 84/120
	I0319 20:29:56.925451   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 85/120
	I0319 20:29:57.926842   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 86/120
	I0319 20:29:58.929060   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 87/120
	I0319 20:29:59.930945   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 88/120
	I0319 20:30:00.932481   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 89/120
	I0319 20:30:01.934231   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 90/120
	I0319 20:30:02.936096   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 91/120
	I0319 20:30:03.937328   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 92/120
	I0319 20:30:04.938680   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 93/120
	I0319 20:30:05.939969   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 94/120
	I0319 20:30:06.941783   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 95/120
	I0319 20:30:07.943586   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 96/120
	I0319 20:30:08.944893   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 97/120
	I0319 20:30:09.947266   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 98/120
	I0319 20:30:10.948667   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 99/120
	I0319 20:30:11.950640   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 100/120
	I0319 20:30:12.952043   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 101/120
	I0319 20:30:13.954201   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 102/120
	I0319 20:30:14.956152   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 103/120
	I0319 20:30:15.958031   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 104/120
	I0319 20:30:16.960066   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 105/120
	I0319 20:30:17.962147   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 106/120
	I0319 20:30:18.963482   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 107/120
	I0319 20:30:19.964865   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 108/120
	I0319 20:30:20.966832   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 109/120
	I0319 20:30:21.969218   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 110/120
	I0319 20:30:22.970699   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 111/120
	I0319 20:30:23.971883   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 112/120
	I0319 20:30:24.973217   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 113/120
	I0319 20:30:25.974538   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 114/120
	I0319 20:30:26.976299   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 115/120
	I0319 20:30:27.977692   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 116/120
	I0319 20:30:28.978748   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 117/120
	I0319 20:30:29.980137   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 118/120
	I0319 20:30:30.981302   58073 main.go:141] libmachine: (embed-certs-421660) Waiting for machine to stop 119/120
	I0319 20:30:31.981997   58073 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0319 20:30:31.982045   58073 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0319 20:30:31.984075   58073 out.go:177] 
	W0319 20:30:31.985478   58073 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0319 20:30:31.985500   58073 out.go:239] * 
	* 
	W0319 20:30:31.988076   58073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:30:31.990279   58073 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-421660 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660: exit status 3 (18.556530293s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:30:50.548532   59180 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host
	E0319 20:30:50.548550   59180 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-421660" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.11s)
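The stop never completed because the guest stayed in the "Running" state until the two-minute wait ran out. A minimal sketch for narrowing this down from the libvirt side, assuming the kvm2 driver's convention (visible in the DBG lines elsewhere in this report) of naming the libvirt domain after the profile; note that virsh destroy is a hard power-off, not a graceful shutdown:

    # state of the domain that "minikube stop" could not shut down
    virsh --connect qemu:///system domstate embed-certs-421660
    # hard power-off if the graceful stop keeps exiting with status 82
    virsh --connect qemu:///system destroy embed-certs-421660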

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-159022 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-159022 create -f testdata/busybox.yaml: exit status 1 (47.142257ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-159022" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-159022 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 6 (253.887509ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:29:58.104559   58773 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159022" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 6 (264.668909ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:29:58.363318   58803 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159022" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)
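The deploy fails before anything is created because the kubeconfig has no entry for the profile, which is exactly what the two status checks above warn about. A minimal sketch of the repair the warning itself points to (kubeconfig path copied from the error above):

    # list the contexts the job's kubeconfig actually contains
    kubectl --kubeconfig /home/jenkins/minikube-integration/18453-10028/kubeconfig config get-contexts
    # regenerate the context from the running profile, per the warning above
    out/minikube-linux-amd64 -p old-k8s-version-159022 update-context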

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (81.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-159022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-159022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m20.776800241s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-159022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-159022 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-159022 describe deploy/metrics-server -n kube-system: exit status 1 (41.957964ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-159022" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-159022 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 6 (235.841828ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:31:19.423578   59504 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159022" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (81.06s)
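The addon enable reached the guest but failed at the apply step: kubectl inside the VM was refused on localhost:8443, meaning the apiserver was not serving at that point. A minimal sketch for checking that before retrying with the same overrides as the failing command (flags copied from the run above):

    # confirm the apiserver component is reported as Running before retrying
    out/minikube-linux-amd64 -p old-k8s-version-159022 status
    # re-run the enable with the identical image and registry overrides
    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-159022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain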

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130
E0319 20:30:04.834415   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130: exit status 3 (3.167353721s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:30:07.380620   58887 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host
	E0319 20:30:07.380642   58887 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-414130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-414130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153040055s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-414130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130: exit status 3 (3.06851582s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:30:16.600657   58957 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host
	E0319 20:30:16.600696   58957 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.29:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-414130" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)
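Every step here fails on the same symptom: dialing 192.168.72.29:22 returns "no route to host", so nothing can be run over SSH and the Host field reads "Error" rather than the expected "Stopped". A quick way to separate an unreachable address from a stopped sshd, assuming netcat is available on the runner (IP taken from the errors above):

    # probe the guest's SSH port with a 5 second timeout
    nc -vz -w 5 192.168.72.29 22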

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-385240 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-385240 --alsologtostderr -v=3: exit status 82 (2m0.510194546s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-385240"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:30:28.941145   59162 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:30:28.941405   59162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:30:28.941415   59162 out.go:304] Setting ErrFile to fd 2...
	I0319 20:30:28.941419   59162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:30:28.941589   59162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:30:28.941820   59162 out.go:298] Setting JSON to false
	I0319 20:30:28.941902   59162 mustload.go:65] Loading cluster: default-k8s-diff-port-385240
	I0319 20:30:28.942243   59162 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:30:28.942303   59162 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:30:28.942465   59162 mustload.go:65] Loading cluster: default-k8s-diff-port-385240
	I0319 20:30:28.942560   59162 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:30:28.942595   59162 stop.go:39] StopHost: default-k8s-diff-port-385240
	I0319 20:30:28.942963   59162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:30:28.943011   59162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:30:28.957138   59162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I0319 20:30:28.957557   59162 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:30:28.958078   59162 main.go:141] libmachine: Using API Version  1
	I0319 20:30:28.958100   59162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:30:28.958493   59162 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:30:28.960896   59162 out.go:177] * Stopping node "default-k8s-diff-port-385240"  ...
	I0319 20:30:28.962249   59162 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0319 20:30:28.962274   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:30:28.962500   59162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0319 20:30:28.962533   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:30:28.965250   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:30:28.965578   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:29:36 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:30:28.965612   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:30:28.965702   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:30:28.965868   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:30:28.966014   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:30:28.966159   59162 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:30:29.070673   59162 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0319 20:30:29.140957   59162 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0319 20:30:29.198241   59162 main.go:141] libmachine: Stopping "default-k8s-diff-port-385240"...
	I0319 20:30:29.198273   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:30:29.199760   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Stop
	I0319 20:30:29.203247   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 0/120
	I0319 20:30:30.204607   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 1/120
	I0319 20:30:31.206590   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 2/120
	I0319 20:30:32.207788   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 3/120
	I0319 20:30:33.209115   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 4/120
	I0319 20:30:34.210484   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 5/120
	I0319 20:30:35.211899   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 6/120
	I0319 20:30:36.213440   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 7/120
	I0319 20:30:37.214668   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 8/120
	I0319 20:30:38.215976   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 9/120
	I0319 20:30:39.218141   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 10/120
	I0319 20:30:40.219464   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 11/120
	I0319 20:30:41.220887   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 12/120
	I0319 20:30:42.222225   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 13/120
	I0319 20:30:43.223617   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 14/120
	I0319 20:30:44.225517   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 15/120
	I0319 20:30:45.226872   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 16/120
	I0319 20:30:46.228136   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 17/120
	I0319 20:30:47.229357   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 18/120
	I0319 20:30:48.230795   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 19/120
	I0319 20:30:49.232918   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 20/120
	I0319 20:30:50.234309   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 21/120
	I0319 20:30:51.235644   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 22/120
	I0319 20:30:52.236841   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 23/120
	I0319 20:30:53.238001   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 24/120
	I0319 20:30:54.239879   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 25/120
	I0319 20:30:55.241096   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 26/120
	I0319 20:30:56.242793   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 27/120
	I0319 20:30:57.244060   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 28/120
	I0319 20:30:58.245449   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 29/120
	I0319 20:30:59.247696   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 30/120
	I0319 20:31:00.248946   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 31/120
	I0319 20:31:01.250724   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 32/120
	I0319 20:31:02.252236   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 33/120
	I0319 20:31:03.253727   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 34/120
	I0319 20:31:04.255736   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 35/120
	I0319 20:31:05.257240   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 36/120
	I0319 20:31:06.258516   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 37/120
	I0319 20:31:07.260000   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 38/120
	I0319 20:31:08.261286   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 39/120
	I0319 20:31:09.263228   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 40/120
	I0319 20:31:10.264650   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 41/120
	I0319 20:31:11.265897   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 42/120
	I0319 20:31:12.267329   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 43/120
	I0319 20:31:13.268576   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 44/120
	I0319 20:31:14.270301   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 45/120
	I0319 20:31:15.271791   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 46/120
	I0319 20:31:16.273015   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 47/120
	I0319 20:31:17.274407   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 48/120
	I0319 20:31:18.275673   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 49/120
	I0319 20:31:19.277177   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 50/120
	I0319 20:31:20.278515   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 51/120
	I0319 20:31:21.279865   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 52/120
	I0319 20:31:22.281419   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 53/120
	I0319 20:31:23.282896   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 54/120
	I0319 20:31:24.285869   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 55/120
	I0319 20:31:25.287627   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 56/120
	I0319 20:31:26.289010   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 57/120
	I0319 20:31:27.290284   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 58/120
	I0319 20:31:28.291641   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 59/120
	I0319 20:31:29.294074   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 60/120
	I0319 20:31:30.295361   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 61/120
	I0319 20:31:31.296717   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 62/120
	I0319 20:31:32.298005   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 63/120
	I0319 20:31:33.299457   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 64/120
	I0319 20:31:34.301449   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 65/120
	I0319 20:31:35.302949   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 66/120
	I0319 20:31:36.304312   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 67/120
	I0319 20:31:37.305623   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 68/120
	I0319 20:31:38.306803   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 69/120
	I0319 20:31:39.309155   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 70/120
	I0319 20:31:40.310764   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 71/120
	I0319 20:31:41.312076   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 72/120
	I0319 20:31:42.313317   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 73/120
	I0319 20:31:43.314598   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 74/120
	I0319 20:31:44.316691   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 75/120
	I0319 20:31:45.317940   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 76/120
	I0319 20:31:46.319235   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 77/120
	I0319 20:31:47.320505   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 78/120
	I0319 20:31:48.321801   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 79/120
	I0319 20:31:49.323936   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 80/120
	I0319 20:31:50.325325   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 81/120
	I0319 20:31:51.326823   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 82/120
	I0319 20:31:52.328228   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 83/120
	I0319 20:31:53.329673   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 84/120
	I0319 20:31:54.331864   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 85/120
	I0319 20:31:55.333267   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 86/120
	I0319 20:31:56.334535   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 87/120
	I0319 20:31:57.336059   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 88/120
	I0319 20:31:58.337381   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 89/120
	I0319 20:31:59.339670   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 90/120
	I0319 20:32:00.341039   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 91/120
	I0319 20:32:01.342530   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 92/120
	I0319 20:32:02.344027   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 93/120
	I0319 20:32:03.345455   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 94/120
	I0319 20:32:04.347440   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 95/120
	I0319 20:32:05.348993   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 96/120
	I0319 20:32:06.350327   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 97/120
	I0319 20:32:07.351752   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 98/120
	I0319 20:32:08.353122   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 99/120
	I0319 20:32:09.355194   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 100/120
	I0319 20:32:10.356791   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 101/120
	I0319 20:32:11.357997   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 102/120
	I0319 20:32:12.359429   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 103/120
	I0319 20:32:13.360766   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 104/120
	I0319 20:32:14.362624   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 105/120
	I0319 20:32:15.364089   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 106/120
	I0319 20:32:16.365502   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 107/120
	I0319 20:32:17.366835   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 108/120
	I0319 20:32:18.368303   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 109/120
	I0319 20:32:19.369913   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 110/120
	I0319 20:32:20.371152   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 111/120
	I0319 20:32:21.372444   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 112/120
	I0319 20:32:22.373725   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 113/120
	I0319 20:32:23.375307   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 114/120
	I0319 20:32:24.377497   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 115/120
	I0319 20:32:25.378865   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 116/120
	I0319 20:32:26.380288   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 117/120
	I0319 20:32:27.381510   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 118/120
	I0319 20:32:28.382912   59162 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for machine to stop 119/120
	I0319 20:32:29.383934   59162 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0319 20:32:29.384006   59162 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0319 20:32:29.385979   59162 out.go:177] 
	W0319 20:32:29.387229   59162 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0319 20:32:29.387243   59162 out.go:239] * 
	W0319 20:32:29.389779   59162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:32:29.391916   59162 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-385240 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240: exit status 3 (18.663426578s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:32:48.056526   59835 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	E0319 20:32:48.056543   59835 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385240" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)
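The 0/120 through 119/120 loop above is libmachine polling once per second, which accounts for the two minutes of wall time before the command gives up with exit status 82. The failure box asks for two artifacts; a minimal sketch of collecting both (file name copied from the box above):

    # full cluster logs for the GitHub issue
    out/minikube-linux-amd64 -p default-k8s-diff-port-385240 logs --file=logs.txt
    # the stop-specific log the box points at
    cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .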

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660: exit status 3 (3.167693672s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:30:53.716558   59302 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host
	E0319 20:30:53.716584   59302 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-421660 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0319 20:30:53.891213   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-421660 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152852756s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-421660 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660: exit status 3 (3.062979991s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 20:31:02.932675   59379 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host
	E0319 20:31:02.932694   59379 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.108:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-421660" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
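The enable aborts at its pre-check: before applying any manifests, minikube lists paused containers with crictl over SSH, and that dial fails with the same no-route-to-host as the status calls. Once the guest is reachable again, roughly the same inspection can be done by hand; this only approximates the check, since minikube's exact crictl invocation may differ:

    # list all containers in the guest through the CRI client
    out/minikube-linux-amd64 -p embed-certs-421660 ssh -- sudo crictl ps -a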

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (748.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-159022 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-159022 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m24.975147421s)

                                                
                                                
-- stdout --
	* [old-k8s-version-159022] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-159022" primary control-plane node in "old-k8s-version-159022" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:31:21.218112   59621 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:31:21.218221   59621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:31:21.218242   59621 out.go:304] Setting ErrFile to fd 2...
	I0319 20:31:21.218248   59621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:31:21.218490   59621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:31:21.219079   59621 out.go:298] Setting JSON to false
	I0319 20:31:21.220008   59621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7979,"bootTime":1710872302,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:31:21.220069   59621 start.go:139] virtualization: kvm guest
	I0319 20:31:21.222294   59621 out.go:177] * [old-k8s-version-159022] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:31:21.223936   59621 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:31:21.225224   59621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:31:21.223963   59621 notify.go:220] Checking for updates...
	I0319 20:31:21.227674   59621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:31:21.228897   59621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:31:21.230084   59621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:31:21.231282   59621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:31:21.233090   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:31:21.233687   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:31:21.233756   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:31:21.248495   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0319 20:31:21.248876   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:31:21.249363   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:31:21.249389   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:31:21.249698   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:31:21.249877   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:31:21.251838   59621 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0319 20:31:21.253019   59621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:31:21.253303   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:31:21.253367   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:31:21.267727   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0319 20:31:21.268099   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:31:21.268624   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:31:21.268654   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:31:21.268995   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:31:21.269167   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:31:21.303484   59621 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:31:21.304792   59621 start.go:297] selected driver: kvm2
	I0319 20:31:21.304805   59621 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:31:21.304901   59621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:31:21.305516   59621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:31:21.305582   59621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:31:21.320170   59621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:31:21.320551   59621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:31:21.320616   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:31:21.320634   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:31:21.320673   59621 start.go:340] cluster config:
	{Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:31:21.320778   59621 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:31:21.323359   59621 out.go:177] * Starting "old-k8s-version-159022" primary control-plane node in "old-k8s-version-159022" cluster
	I0319 20:31:21.324501   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:31:21.324529   59621 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0319 20:31:21.324536   59621 cache.go:56] Caching tarball of preloaded images
	I0319 20:31:21.324610   59621 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:31:21.324620   59621 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0319 20:31:21.324704   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:31:21.324862   59621 start.go:360] acquireMachinesLock for old-k8s-version-159022: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
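The retry lines above are libmachine polling libvirt's DHCP leases until the restarted VM reports an address, sleeping a jittered and growing delay between attempts before falling through to the SSH reachability check. A minimal Go sketch of that wait-with-backoff loop (illustrative only, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls fn until it succeeds or the timeout expires, sleeping a
// jittered, growing delay between attempts, roughly like the
// "will retry after ..." lines above. Illustrative sketch only.
func waitFor(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		// sleep the base delay plus random jitter, then grow the delay
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	err := waitFor(30*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}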
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
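The two SSH commands above set the guest's hostname and ensure /etc/hosts carries a matching 127.0.1.1 entry. A small Go sketch that builds those remote commands from a profile name (the command text is copied from the log; the helper itself is hypothetical):

package main

import "fmt"

// hostnameCommands returns the two remote commands run above: write
// /etc/hostname, then add or rewrite the 127.0.1.1 line in /etc/hosts.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return []string{set, hosts}
}

func main() {
	for _, cmd := range hostnameCommands("old-k8s-version-159022") {
		fmt.Println(cmd)
	}
}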
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
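copyHostCerts above refreshes ca.pem, cert.pem and key.pem in the .minikube root by removing the old copy and re-copying the one from certs/, as the "found ..., removing ..." / "cp: ..." lines show. A simplified Go sketch of that replace-then-copy step (paths are illustrative, not minikube's path resolver):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyHostCert replaces dst with a fresh copy of src, mirroring the
// remove-then-copy pattern in the log above.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	base := filepath.Join(os.Getenv("HOME"), ".minikube")
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		if err := copyHostCert(filepath.Join(base, "certs", name), filepath.Join(base, name)); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}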
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
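Before choosing a CNI, minikube renames any bridge/podman configs in /etc/cni/net.d to *.mk_disabled, which is what the find/mv command above does (here it disabled 87-podman-bridge.conflist). A Go sketch of the same rename pass (sketch only, directory assumed to be /etc/cni/net.d):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs so the runtime
// ignores them, mirroring the "find ... -exec mv {} {}.mk_disabled" run.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(files, err)
}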
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
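The sed commands above point CRI-O at the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager before the daemon is restarted. A Go sketch that applies those two settings by rewriting the drop-in file directly (a simplification of the sed-based edit; path and values are the ones visible in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// patchCrioConf rewrites the pause_image and cgroup_manager settings in a
// CRI-O drop-in, the same effect as the sed one-liners run above.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for i, line := range lines {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "pause_image"):
			lines[i] = fmt.Sprintf("pause_image = %q", pauseImage)
		case strings.HasPrefix(trimmed, "cgroup_manager"):
			lines[i] = fmt.Sprintf("cgroup_manager = %q", cgroupManager)
		}
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs")
	fmt.Println(err)
}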
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
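The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the host-side gateway (192.168.61.1 on this network). A Go sketch of the same drop-stale-entry-then-append logic, done in-process rather than via grep/cp (values taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line mentioning host and appends a
// fresh "ip<TAB>host" mapping, like the shell pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.Contains(line, host) {
			continue // drop the stale entry, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"))
}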
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
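The steps above copy the preloaded image tarball into the guest and unpack it into /var with tar and lz4, reporting how long each phase took. A Go sketch that runs the same tar invocation and times it, like the duration metrics in the log (it assumes the tarball has already been copied to /preloaded.tar.lz4):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload runs the tar command seen in the log and reports the
// elapsed time. Sketch only; requires sudo, tar and lz4 on the host.
func extractPreload() (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload()
	fmt.Println(d, err)
}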
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
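The stat failure above is minikube checking each cached image archive on disk before transferring it to the node; the archive for coredns_1.7.0 simply is not present in the cache directory. A minimal standalone sketch of that existence check follows (the helper name and message wording are illustrative, not minikube's actual code; the cache layout images/<arch>/<registry>/<name>_<tag> is taken from the paths in the log):

// Sketch only: verify a cached image archive exists before trying to load it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image ref like "registry.k8s.io/coredns:1.7.0" to the
// on-disk cache layout seen in the log (.../cache/images/amd64/registry.k8s.io/coredns_1.7.0).
func cachedImagePath(cacheDir, arch, image string) string {
	name := strings.ReplaceAll(image, ":", "_")
	return filepath.Join(cacheDir, "images", arch, name)
}

func main() {
	p := cachedImagePath("/home/jenkins/minikube-integration/18453-10028/.minikube/cache",
		"amd64", "registry.k8s.io/coredns:1.7.0")
	if _, err := os.Stat(p); err != nil {
		// This is the condition behind "Unable to load cached images: ... no such file or directory".
		fmt.Printf("unable to load cached image: %v\n", err)
		return
	}
	fmt.Println("cached image archive present:", p)
}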
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
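The kubeadm, kubelet and kube-proxy configuration printed above is rendered in memory and then copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step just logged. As a rough illustration of that render step, here is a hedged sketch that fills in a fragment of the ClusterConfiguration from Go values with text/template; the struct and field names are hypothetical and this is not minikube's actual generator:

// Sketch only: render a ClusterConfiguration fragment from Go values.
package main

import (
	"os"
	"text/template"
)

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type clusterParams struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	DNSDomain           string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	p := clusterParams{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.20.0",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	// The render must succeed before the file can be shipped to the node.
	if err := template.Must(template.New("cc").Parse(clusterConfigTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}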
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
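The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will expire within the next 24 hours (the command exits non-zero if so). The same check in Go, as a small sketch using crypto/x509; the path is one of those from the log and the function name is made up for illustration:

// Sketch only: report whether a PEM certificate expires within a given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Mirrors "-checkend 86400": true if NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h - regenerate it")
	}
}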
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
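Because existing configuration files were found, the restart path above runs individual kubeadm init phase subcommands rather than a full kubeadm init. A minimal driver for that same sequence, sketched with os/exec (phases and paths copied from the log, error handling simplified; this is not minikube's bootstrapper code):

// Sketch only: run the logged "kubeadm init phase" sequence against one config file.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, ph := range phases {
		args := append(ph, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", ph, err)
			os.Exit(1)
		}
	}
}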
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
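The half-second spacing of the pgrep calls above comes from a poll-until-deadline loop: the apiserver process never appears, so after roughly a minute the tooling gives up and falls back to gathering diagnostics below. A standalone sketch of such a loop, with the interval and timeout inferred from the timestamps and the helper names made up for illustration:

// Sketch only: poll for the apiserver process at a fixed interval until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged check:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err) // at this point the log collection pass starts
	}
}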
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
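Each "Gathering logs for ..." step above runs one shell pipeline on the node (via minikube's ssh_runner). A local stand-in that walks the same five sources with the commands copied verbatim from the log, sketched for illustration only and run directly rather than over SSH:

// Sketch only: collect the same diagnostic sources the log gathers when the apiserver is down.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// With the apiserver down, "describe nodes" fails just as in the log.
			fmt.Printf("failed to gather %s: %v\n%s\n", s.name, err, out)
			continue
		}
		fmt.Printf("==> %s <==\n%s\n", s.name, out)
	}
}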
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
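
	The cycle above repeats throughout this log: the runner probes the node for a running kube-apiserver with pgrep, then asks crictl for each control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal Go sketch of that probe loop follows, assuming crictl and sudo are available on the node; runCmd and the literal container names are illustrative stand-ins, not minikube's actual ssh_runner/cri API.

	// Sketch of the poll-and-gather cycle recorded in this log: probe for a
	// running kube-apiserver, then list each control-plane container via
	// crictl. runCmd is a hypothetical local helper, not minikube's API.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func runCmd(args ...string) (string, error) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func apiserverRunning() bool {
		// Same probe the log shows on every iteration.
		_, err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		return err == nil
	}

	func listContainers(name string) []string {
		out, _ := runCmd("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name)
		if out == "" {
			return nil
		}
		return strings.Split(out, "\n")
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for !apiserverRunning() {
			for _, c := range components {
				if ids := listContainers(c); len(ids) == 0 {
					fmt.Printf("No container was found matching %q\n", c)
				}
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
		}
	}
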
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
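
	Every "failed describe nodes" block above ends the same way because kubectl is pointed at localhost:8443 while no apiserver is listening, so it exits with status 1 and the connection-refused stderr is captured verbatim. A short sketch of that gather step, reusing the exact command from the log (the surrounding program is illustrative only):

	// Sketch of the "describe nodes" gather step. With no apiserver on
	// localhost:8443 this prints exit status 1 plus the connection-refused
	// stderr, matching the blocks recorded above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("failed describe nodes: %v\n%s", err, out)
			return
		}
		fmt.Println(string(out))
	}
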
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
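
	The remaining "Gathering logs for ..." lines in each cycle map one source to one shell command, all copied directly from the log: kubelet and CRI-O via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. A sketch of that mapping, assuming the commands are run locally rather than over minikube's SSH runner:

	// Sketch of the per-cycle log gathering: each source name maps to the
	// shell command the runner executes. Commands are taken from the log;
	// running them locally instead of over SSH is an assumption.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			fmt.Println("Gathering logs for", name, "...")
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("  %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("  %d bytes collected\n", len(out))
		}
	}
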
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
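The cycle above is the probe minikube repeats while waiting for the control plane: roughly every three seconds it looks for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal shell equivalent of the per-component check (illustrative only; it assumes crictl is available inside the guest and is run as root) looks like:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # ask the CRI runtime for any container (running or exited) with this name
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""   # matches the warnings in the log
      else
        echo "$c: $ids"
      fi
    done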
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
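Every "describe nodes" attempt above fails identically: kubectl is pointed at the apiserver on localhost:8443 through the guest-local kubeconfig, and the connection is refused because no kube-apiserver container exists yet. A quick manual check along the same lines (a sketch only; the curl probe is an addition, while the binary path and kubeconfig location are copied from the log) would be:

    # probe the apiserver port directly; "connection refused" means nothing is listening on 8443
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"

    # the same call the log makes, using the kubectl binary staged by minikube
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig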
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-159022 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (254.885792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-159022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-159022 logs -n 25: (1.604103796s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:33:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:33:00.489344   60008 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:33:00.489594   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489603   60008 out.go:304] Setting ErrFile to fd 2...
	I0319 20:33:00.489607   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489787   60008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:33:00.490297   60008 out.go:298] Setting JSON to false
	I0319 20:33:00.491188   60008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8078,"bootTime":1710872302,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:33:00.491245   60008 start.go:139] virtualization: kvm guest
	I0319 20:33:00.493588   60008 out.go:177] * [default-k8s-diff-port-385240] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:33:00.495329   60008 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:33:00.496506   60008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:33:00.495369   60008 notify.go:220] Checking for updates...
	I0319 20:33:00.499210   60008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:33:00.500494   60008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:33:00.501820   60008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:33:00.503200   60008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:33:00.504837   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:33:00.505191   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.505266   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.519674   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0319 20:33:00.520123   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.520634   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.520656   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.520945   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.521132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.521364   60008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:33:00.521629   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.521660   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.535764   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0319 20:33:00.536105   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.536564   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.536583   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.536890   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.537079   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.572160   60008 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:33:00.573517   60008 start.go:297] selected driver: kvm2
	I0319 20:33:00.573530   60008 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.573663   60008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:33:00.574335   60008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.574423   60008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:33:00.588908   60008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:33:00.589283   60008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:33:00.589354   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:33:00.589375   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:33:00.589419   60008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.589532   60008 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.591715   60008 out.go:177] * Starting "default-k8s-diff-port-385240" primary control-plane node in "default-k8s-diff-port-385240" cluster
	I0319 20:32:58.292485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:01.364553   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:00.593043   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:33:00.593084   60008 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:33:00.593094   60008 cache.go:56] Caching tarball of preloaded images
	I0319 20:33:00.593156   60008 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:33:00.593166   60008 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:33:00.593281   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:33:00.593454   60008 start.go:360] acquireMachinesLock for default-k8s-diff-port-385240: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:33:07.444550   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:10.516480   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:16.596485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:19.668501   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:25.748504   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:28.820525   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:34.900508   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:37.972545   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:44.052478   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:47.124492   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:53.204484   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:56.276536   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:02.356552   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:05.428529   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:11.508540   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:14.580485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:20.660521   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:23.732555   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:29.812516   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:32.884574   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:38.964472   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:42.036583   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:48.116547   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:51.188507   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:54.193037   59415 start.go:364] duration metric: took 3m51.108134555s to acquireMachinesLock for "embed-certs-421660"
	I0319 20:34:54.193108   59415 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:34:54.193120   59415 fix.go:54] fixHost starting: 
	I0319 20:34:54.193458   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:34:54.193487   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:34:54.208614   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0319 20:34:54.209078   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:34:54.209506   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:34:54.209527   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:34:54.209828   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:34:54.209992   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:34:54.210117   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:34:54.211626   59415 fix.go:112] recreateIfNeeded on embed-certs-421660: state=Stopped err=<nil>
	I0319 20:34:54.211661   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	W0319 20:34:54.211820   59415 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:34:54.213989   59415 out.go:177] * Restarting existing kvm2 VM for "embed-certs-421660" ...
	I0319 20:34:54.190431   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:34:54.190483   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.190783   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:34:54.190809   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.191021   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:34:54.192901   59019 machine.go:97] duration metric: took 4m37.398288189s to provisionDockerMachine
	I0319 20:34:54.192939   59019 fix.go:56] duration metric: took 4m37.41948201s for fixHost
	I0319 20:34:54.192947   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 4m37.419503815s
	W0319 20:34:54.192970   59019 start.go:713] error starting host: provision: host is not running
	W0319 20:34:54.193060   59019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0319 20:34:54.193071   59019 start.go:728] Will try again in 5 seconds ...
	I0319 20:34:54.215391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Start
	I0319 20:34:54.215559   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring networks are active...
	I0319 20:34:54.216249   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network default is active
	I0319 20:34:54.216543   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network mk-embed-certs-421660 is active
	I0319 20:34:54.216902   59415 main.go:141] libmachine: (embed-certs-421660) Getting domain xml...
	I0319 20:34:54.217595   59415 main.go:141] libmachine: (embed-certs-421660) Creating domain...
	I0319 20:34:55.407058   59415 main.go:141] libmachine: (embed-certs-421660) Waiting to get IP...
	I0319 20:34:55.407855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.408280   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.408343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.408247   60323 retry.go:31] will retry after 202.616598ms: waiting for machine to come up
	I0319 20:34:55.612753   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.613313   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.613341   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.613247   60323 retry.go:31] will retry after 338.618778ms: waiting for machine to come up
	I0319 20:34:55.953776   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.954230   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.954259   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.954164   60323 retry.go:31] will retry after 389.19534ms: waiting for machine to come up
	I0319 20:34:56.344417   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.344855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.344886   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.344822   60323 retry.go:31] will retry after 555.697854ms: waiting for machine to come up
	I0319 20:34:56.902547   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.902990   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.903017   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.902955   60323 retry.go:31] will retry after 702.649265ms: waiting for machine to come up
	I0319 20:34:57.606823   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:57.607444   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:57.607484   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:57.607388   60323 retry.go:31] will retry after 814.886313ms: waiting for machine to come up
	I0319 20:34:59.194634   59019 start.go:360] acquireMachinesLock for no-preload-414130: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:34:58.424559   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:58.425066   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:58.425088   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:58.425011   60323 retry.go:31] will retry after 948.372294ms: waiting for machine to come up
	I0319 20:34:59.375490   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:59.375857   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:59.375884   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:59.375809   60323 retry.go:31] will retry after 1.206453994s: waiting for machine to come up
	I0319 20:35:00.584114   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:00.584548   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:00.584572   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:00.584496   60323 retry.go:31] will retry after 1.200177378s: waiting for machine to come up
	I0319 20:35:01.786803   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:01.787139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:01.787167   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:01.787085   60323 retry.go:31] will retry after 1.440671488s: waiting for machine to come up
	I0319 20:35:03.229775   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:03.230179   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:03.230216   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:03.230146   60323 retry.go:31] will retry after 2.073090528s: waiting for machine to come up
	I0319 20:35:05.305427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:05.305904   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:05.305930   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:05.305859   60323 retry.go:31] will retry after 3.463824423s: waiting for machine to come up
	I0319 20:35:08.773517   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:08.773911   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:08.773938   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:08.773873   60323 retry.go:31] will retry after 4.159170265s: waiting for machine to come up
	I0319 20:35:12.937475   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937965   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has current primary IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937979   59415 main.go:141] libmachine: (embed-certs-421660) Found IP for machine: 192.168.50.108
	I0319 20:35:12.937987   59415 main.go:141] libmachine: (embed-certs-421660) Reserving static IP address...
	I0319 20:35:12.938372   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.938400   59415 main.go:141] libmachine: (embed-certs-421660) DBG | skip adding static IP to network mk-embed-certs-421660 - found existing host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"}
	I0319 20:35:12.938412   59415 main.go:141] libmachine: (embed-certs-421660) Reserved static IP address: 192.168.50.108
	I0319 20:35:12.938435   59415 main.go:141] libmachine: (embed-certs-421660) Waiting for SSH to be available...
	I0319 20:35:12.938448   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Getting to WaitForSSH function...
	I0319 20:35:12.940523   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.940897   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.940932   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.941037   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH client type: external
	I0319 20:35:12.941069   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa (-rw-------)
	I0319 20:35:12.941102   59415 main.go:141] libmachine: (embed-certs-421660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:12.941116   59415 main.go:141] libmachine: (embed-certs-421660) DBG | About to run SSH command:
	I0319 20:35:12.941128   59415 main.go:141] libmachine: (embed-certs-421660) DBG | exit 0
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:13.068386   59415 main.go:141] libmachine: (embed-certs-421660) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:13.068756   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetConfigRaw
	I0319 20:35:13.069421   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.071751   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072101   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.072133   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072393   59415 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:35:13.072557   59415 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:13.072574   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:13.072781   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.075005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.075369   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.075678   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075816   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075973   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.076134   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.076364   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.076382   59415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:13.188983   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:13.189017   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189291   59415 buildroot.go:166] provisioning hostname "embed-certs-421660"
	I0319 20:35:13.189319   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189503   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.191881   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192190   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.192210   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192389   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.192550   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192696   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192818   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.192989   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.193145   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.193159   59415 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-421660 && echo "embed-certs-421660" | sudo tee /etc/hostname
	I0319 20:35:13.326497   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-421660
	
	I0319 20:35:13.326524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.329344   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.329765   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.330179   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330372   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330547   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.330753   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.330928   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.330943   59415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-421660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-421660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-421660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:13.454265   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:13.454297   59415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:13.454320   59415 buildroot.go:174] setting up certificates
	I0319 20:35:13.454334   59415 provision.go:84] configureAuth start
	I0319 20:35:13.454348   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.454634   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.457258   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457692   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.457723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457834   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.460123   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460436   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.460463   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460587   59415 provision.go:143] copyHostCerts
	I0319 20:35:13.460643   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:13.460652   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:13.460719   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:13.460815   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:13.460822   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:13.460846   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:13.460917   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:13.460924   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:13.460945   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:13.461004   59415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.embed-certs-421660 san=[127.0.0.1 192.168.50.108 embed-certs-421660 localhost minikube]
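provision.go generates this server certificate in Go, but an equivalent manual run with the openssl CLI looks roughly like the following (an illustrative sketch, not minikube's actual code path; the organization and SAN list are taken from the org=... and san=[...] fields logged above, file names are hypothetical):

  # key + CSR for the machine's server certificate
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server.csr \
    -subj "/O=jenkins.embed-certs-421660/CN=minikube"
  # sign it with the shared minikube CA, adding the logged SANs
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 365 -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.108,DNS:embed-certs-421660,DNS:localhost,DNS:minikube')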
	I0319 20:35:13.553348   59415 provision.go:177] copyRemoteCerts
	I0319 20:35:13.553399   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:13.553424   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.555729   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556036   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.556071   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556199   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.556406   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.556579   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.556725   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:13.642780   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:35:13.670965   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:13.698335   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:13.724999   59415 provision.go:87] duration metric: took 270.652965ms to configureAuth
	I0319 20:35:13.725022   59415 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:13.725174   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:13.725235   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.727653   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.727969   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.727988   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.728186   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.728410   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728581   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728783   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.728960   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.729113   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.729130   59415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:14.012527   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:14.012554   59415 machine.go:97] duration metric: took 939.982813ms to provisionDockerMachine
	I0319 20:35:14.012568   59415 start.go:293] postStartSetup for "embed-certs-421660" (driver="kvm2")
	I0319 20:35:14.012582   59415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:14.012616   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.012969   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:14.012996   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.015345   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015706   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.015759   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015864   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.016069   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.016269   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.016409   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.105236   59415 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:14.110334   59415 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:14.110363   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:14.110435   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:14.110534   59415 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:14.110623   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:14.120911   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:14.148171   59415 start.go:296] duration metric: took 135.590484ms for postStartSetup
	I0319 20:35:14.148209   59415 fix.go:56] duration metric: took 19.955089617s for fixHost
	I0319 20:35:14.148234   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.150788   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.151165   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151331   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.151514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151667   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151784   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.151953   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:14.152125   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:14.152138   59415 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:14.265435   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880514.234420354
	
	I0319 20:35:14.265467   59415 fix.go:216] guest clock: 1710880514.234420354
	I0319 20:35:14.265478   59415 fix.go:229] Guest: 2024-03-19 20:35:14.234420354 +0000 UTC Remote: 2024-03-19 20:35:14.148214105 +0000 UTC m=+251.208119911 (delta=86.206249ms)
	I0319 20:35:14.265507   59415 fix.go:200] guest clock delta is within tolerance: 86.206249ms
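The fix.go lines above compare the guest's wall clock (read over SSH with date +%s.%N) against the host's and accept any delta inside the tolerance window; here the skew is about 86ms. A rough standalone reproduction of the same check (hypothetical; uses the minikube CLI rather than libmachine and assumes bc is installed on the host):

  guest=$(minikube -p embed-certs-421660 ssh 'date +%s.%N')
  host=$(date +%s.%N)
  # print the skew in seconds; sub-second values are well inside tolerance
  echo "clock delta: $(echo "$host - $guest" | bc) s"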
	I0319 20:35:14.265516   59415 start.go:83] releasing machines lock for "embed-certs-421660", held for 20.072435424s
	I0319 20:35:14.265554   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.265868   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:14.268494   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268846   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.268874   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269589   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269751   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269833   59415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:14.269884   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.269956   59415 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:14.269972   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.272604   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272771   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272978   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273137   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273140   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273160   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273316   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273337   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273473   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273685   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.273738   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.358033   59415 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:14.385511   59415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:14.542052   59415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:14.549672   59415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:14.549747   59415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:14.569110   59415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:14.569137   59415 start.go:494] detecting cgroup driver to use...
	I0319 20:35:14.569193   59415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:14.586644   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:14.601337   59415 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:14.601407   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:14.616158   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:14.631754   59415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:14.746576   59415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:14.902292   59415 docker.go:233] disabling docker service ...
	I0319 20:35:14.902353   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:14.920787   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:14.938865   59415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:15.078791   59415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:15.214640   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:15.242992   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:15.264698   59415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:15.264755   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.276750   59415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:15.276817   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.288643   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.300368   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.318906   59415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:15.338660   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.351908   59415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.372022   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
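Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and open unprivileged ports via default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm the result on the node (a sketch; the grep keys are simply the ones touched above):

  # expected values after the edits (reconstruction from the sed commands, not a file dump):
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf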
	I0319 20:35:15.384124   59415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:15.395206   59415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:15.395268   59415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:15.411193   59415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
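sysctl cannot see net.bridge.bridge-nf-call-iptables until the br_netfilter module is loaded, which is why the probe above exits with status 255 and is followed by modprobe br_netfilter and enabling IPv4 forwarding. The end state can be verified by hand on the node (sketch):

  lsmod | grep br_netfilter                    # module should now be loaded
  sysctl net.ipv4.ip_forward                   # should report 1 after the echo above
  sysctl net.bridge.bridge-nf-call-iptables    # resolvable once br_netfilter is loaded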
	I0319 20:35:15.422031   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:15.572313   59415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:15.730316   59415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:15.730389   59415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:15.738539   59415 start.go:562] Will wait 60s for crictl version
	I0319 20:35:15.738600   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:35:15.743107   59415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:15.788582   59415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:15.788666   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.819444   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.859201   59415 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:15.860492   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:15.863655   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864032   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:15.864063   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864303   59415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:15.870600   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:15.885694   59415 kubeadm.go:877] updating cluster {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:15.885833   59415 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:15.885890   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:15.924661   59415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:15.924736   59415 ssh_runner.go:195] Run: which lz4
	I0319 20:35:15.929595   59415 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:15.934980   59415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:15.935014   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:17.673355   59415 crio.go:462] duration metric: took 1.743798593s to copy over tarball
	I0319 20:35:17.673428   59415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:20.169596   59415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49612841s)
	I0319 20:35:20.169629   59415 crio.go:469] duration metric: took 2.496240167s to extract the tarball
	I0319 20:35:20.169639   59415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:20.208860   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:20.261040   59415 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:35:20.261063   59415 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:35:20.261071   59415 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.29.3 crio true true} ...
	I0319 20:35:20.261162   59415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-421660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:20.261227   59415 ssh_runner.go:195] Run: crio config
	I0319 20:35:20.311322   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:20.311346   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:20.311359   59415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:20.311377   59415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-421660 NodeName:embed-certs-421660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:35:20.311501   59415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-421660"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:20.311560   59415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:35:20.323700   59415 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:20.323776   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:20.334311   59415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0319 20:35:20.352833   59415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:20.372914   59415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
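At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the kubeadm config rendered above have been copied onto the VM. They can be inspected on the node with, for example (a sketch using the minikube CLI; the paths are the ones shown in the scp lines above):

  minikube -p embed-certs-421660 ssh 'sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
  minikube -p embed-certs-421660 ssh 'sudo cat /var/tmp/minikube/kubeadm.yaml.new'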
	I0319 20:35:20.391467   59415 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:20.395758   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:20.408698   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:20.532169   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:20.550297   59415 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660 for IP: 192.168.50.108
	I0319 20:35:20.550320   59415 certs.go:194] generating shared ca certs ...
	I0319 20:35:20.550339   59415 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:20.550507   59415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:20.550574   59415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:20.550586   59415 certs.go:256] generating profile certs ...
	I0319 20:35:20.550700   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/client.key
	I0319 20:35:20.550774   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key.e5ca10b2
	I0319 20:35:20.550824   59415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key
	I0319 20:35:20.550954   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:20.550988   59415 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:20.551001   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:20.551037   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:20.551070   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:20.551101   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:20.551155   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:20.552017   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:20.583444   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:20.616935   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:20.673499   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:20.707988   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0319 20:35:20.734672   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:35:20.761302   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:20.792511   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:20.819903   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:20.848361   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:20.878230   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:20.908691   59415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:20.930507   59415 ssh_runner.go:195] Run: openssl version
	I0319 20:35:20.937088   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:20.949229   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954299   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954343   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.960610   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:20.972162   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:20.984137   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989211   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989273   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.995436   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:21.007076   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:21.018552   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024109   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024146   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.030344   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
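(Note: the openssl/ln pairs above follow the standard OpenSSL hashed-certificate-directory convention; a minimal sketch of the same step, using an illustrative file name rather than the exact certs from this run:)

# compute the subject hash OpenSSL uses to index CA certificates
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
# link the CA under /etc/ssl/certs/<hash>.0 so TLS clients that scan the
# hashed directory can find and trust it
sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${HASH}.0"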
	I0319 20:35:21.041615   59415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:21.046986   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:21.053533   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:21.060347   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:21.067155   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:21.074006   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:21.080978   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
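(Note: `openssl x509 -checkend 86400` exits 0 only if the certificate remains valid for the next 86400 seconds (24 h); a minimal sketch of how such a check is typically used, with an illustrative path:)

if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate expires within 24h (or is already expired); regenerate it"
fi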
	I0319 20:35:21.087615   59415 kubeadm.go:391] StartCluster: {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:21.087695   59415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:21.087745   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.131217   59415 cri.go:89] found id: ""
	I0319 20:35:21.131294   59415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:21.143460   59415 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:21.143487   59415 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:21.143493   59415 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:21.143545   59415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:21.156145   59415 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:21.157080   59415 kubeconfig.go:125] found "embed-certs-421660" server: "https://192.168.50.108:8443"
	I0319 20:35:21.158865   59415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:21.171515   59415 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.108
	I0319 20:35:21.171551   59415 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:21.171561   59415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:21.171607   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.221962   59415 cri.go:89] found id: ""
	I0319 20:35:21.222028   59415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:21.239149   59415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:21.250159   59415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:21.250185   59415 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:21.250242   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:21.260035   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:21.260107   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:21.270804   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:21.281041   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:21.281106   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:21.291796   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.301883   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:21.301943   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.313038   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:21.323390   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:21.323462   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:21.333893   59415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:21.344645   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:21.491596   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.349871   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.592803   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.670220   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
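(Note: the restart path above re-runs individual `kubeadm init` phases rather than a full `kubeadm init`; roughly, and dropping the PATH override shown in the log for brevity:)

sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml  # regenerate any missing certificates
sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml  # admin/kubelet/controller-manager/scheduler kubeconfigs
sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml  # write kubelet config and start the kubelet
sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml  # static pod manifests for apiserver/controller-manager/scheduler
sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml  # static pod manifest for the local etcd member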
	I0319 20:35:22.802978   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:22.803071   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
	I0319 20:35:23.303983   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.803254   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.846475   59415 api_server.go:72] duration metric: took 1.043496842s to wait for apiserver process to appear ...
	I0319 20:35:23.846509   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:35:23.846532   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:23.847060   59415 api_server.go:269] stopped: https://192.168.50.108:8443/healthz: Get "https://192.168.50.108:8443/healthz": dial tcp 192.168.50.108:8443: connect: connection refused
	I0319 20:35:24.347376   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.456794   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.456826   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.456841   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.492793   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.492827   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.847365   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.857297   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:26.857327   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.346936   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.351748   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:27.351775   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.847430   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.852157   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:35:27.868953   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:35:27.869006   59415 api_server.go:131] duration metric: took 4.022477349s to wait for apiserver health ...
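(Note on the healthz sequence above: the early 403s come from the unauthenticated probe hitting /healthz before the RBAC bootstrap post-start hook has run, the 500s list the post-start hooks that are still pending, and the final 200 with body "ok" means every hook has completed. A minimal sketch of the same wait, with the endpoint taken from this run:)

# poll until /healthz returns the literal body "ok" (-k: no client cert, hence the early 403s)
until curl -ksf https://192.168.50.108:8443/healthz | grep -qx ok; do
  sleep 0.5
done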
	I0319 20:35:27.869019   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:27.869029   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:27.871083   59415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:35:27.872669   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:35:27.886256   59415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
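(Note: the 457-byte file copied above is a bridge CNI config; an illustrative conflist of the same general shape, with field values that are assumptions rather than minikube's exact output:)

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF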
	I0319 20:35:27.912891   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:35:27.928055   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:35:27.928088   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:35:27.928095   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:35:27.928102   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:35:27.928108   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:35:27.928113   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:35:27.928118   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:35:27.928126   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:35:27.928130   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:35:27.928136   59415 system_pods.go:74] duration metric: took 15.221738ms to wait for pod list to return data ...
	I0319 20:35:27.928142   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:35:27.931854   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:35:27.931876   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:35:27.931888   59415 node_conditions.go:105] duration metric: took 3.74189ms to run NodePressure ...
	I0319 20:35:27.931903   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:28.209912   59415 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215315   59415 kubeadm.go:733] kubelet initialised
	I0319 20:35:28.215343   59415 kubeadm.go:734] duration metric: took 5.403708ms waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215353   59415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:28.221636   59415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.230837   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230868   59415 pod_ready.go:81] duration metric: took 9.198177ms for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.230878   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230887   59415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.237452   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237472   59415 pod_ready.go:81] duration metric: took 6.569363ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.237479   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237485   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.242902   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242919   59415 pod_ready.go:81] duration metric: took 5.427924ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.242926   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242931   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.316859   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316889   59415 pod_ready.go:81] duration metric: took 73.950437ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.316901   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316908   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.717107   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717133   59415 pod_ready.go:81] duration metric: took 400.215265ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.717143   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717151   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.117365   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117403   59415 pod_ready.go:81] duration metric: took 400.242952ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.117416   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117427   59415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.517914   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517950   59415 pod_ready.go:81] duration metric: took 400.512217ms for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.517962   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517974   59415 pod_ready.go:38] duration metric: took 1.302609845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
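(Note: each wait above is skipped because the node itself still reports Ready=False; an approximate kubectl equivalent of the per-component wait, using the labels listed in the log:)

# wait for each group of system-critical pods, one label selector at a time
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=4m
done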
	I0319 20:35:29.518009   59415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:35:29.534665   59415 ops.go:34] apiserver oom_adj: -16
	I0319 20:35:29.534686   59415 kubeadm.go:591] duration metric: took 8.39118752s to restartPrimaryControlPlane
	I0319 20:35:29.534697   59415 kubeadm.go:393] duration metric: took 8.447087595s to StartCluster
	I0319 20:35:29.534713   59415 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.534814   59415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:29.536379   59415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.536620   59415 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:35:29.538397   59415 out.go:177] * Verifying Kubernetes components...
	I0319 20:35:29.536707   59415 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:35:29.536837   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:29.539696   59415 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-421660"
	I0319 20:35:29.539709   59415 addons.go:69] Setting metrics-server=true in profile "embed-certs-421660"
	I0319 20:35:29.539739   59415 addons.go:234] Setting addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:29.539747   59415 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-421660"
	W0319 20:35:29.539751   59415 addons.go:243] addon metrics-server should already be in state true
	W0319 20:35:29.539757   59415 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:35:29.539782   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539786   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539700   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:29.539700   59415 addons.go:69] Setting default-storageclass=true in profile "embed-certs-421660"
	I0319 20:35:29.539882   59415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-421660"
	I0319 20:35:29.540079   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540098   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540107   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540120   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540243   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540282   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.554668   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0319 20:35:29.554742   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0319 20:35:29.554815   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0319 20:35:29.555109   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555148   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555220   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555703   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555708   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555722   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555726   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555828   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555847   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.556077   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556206   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556273   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.556627   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556669   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.556753   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556787   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.559109   59415 addons.go:234] Setting addon default-storageclass=true in "embed-certs-421660"
	W0319 20:35:29.559126   59415 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:35:29.559150   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.559390   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.559425   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.570567   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0319 20:35:29.571010   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.571467   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.571492   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.571831   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.572018   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.573621   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.575889   59415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:35:29.574300   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0319 20:35:29.574529   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0319 20:35:29.577448   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:35:29.577473   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:35:29.577496   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.577913   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.577957   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.578350   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578382   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.578751   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.578877   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578901   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.579318   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.579431   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.579495   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.579509   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.580582   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581050   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.581074   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581166   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.581276   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.583314   59415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:29.581522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.584941   59415 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.584951   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:35:29.584963   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.584980   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.585154   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.587700   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588076   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.588104   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588289   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.588463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.588614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.588791   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.594347   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0319 20:35:29.594626   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.595030   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.595062   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.595384   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.595524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.596984   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.597209   59415 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.597224   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:35:29.597238   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.599955   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.600457   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600533   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.600682   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.600829   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.600926   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.719989   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:29.737348   59415 node_ready.go:35] waiting up to 6m0s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:29.839479   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.839994   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:35:29.840016   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:35:29.852112   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.904335   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:35:29.904358   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:35:29.969646   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:29.969675   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:35:30.031528   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:31.120085   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.280572793s)
	I0319 20:35:31.120135   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120148   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120172   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268019206s)
	I0319 20:35:31.120214   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120229   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120430   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120448   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120457   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120544   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120564   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120588   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120606   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120758   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120788   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120827   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120833   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120841   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.127070   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.127085   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.127287   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.127301   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.138956   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.107385118s)
	I0319 20:35:31.139006   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139027   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139257   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139301   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139319   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139330   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139342   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139546   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139550   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139564   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139579   59415 addons.go:470] Verifying addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:31.141587   59415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:31.142927   59415 addons.go:505] duration metric: took 1.606231661s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:35:31.741584   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.101840   60008 start.go:364] duration metric: took 2m35.508355671s to acquireMachinesLock for "default-k8s-diff-port-385240"
	I0319 20:35:36.101908   60008 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:36.101921   60008 fix.go:54] fixHost starting: 
	I0319 20:35:36.102308   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:36.102352   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:36.118910   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0319 20:35:36.119363   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:36.119926   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:35:36.119957   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:36.120271   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:36.120450   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:36.120614   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:35:36.122085   60008 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385240: state=Stopped err=<nil>
	I0319 20:35:36.122112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	W0319 20:35:36.122284   60008 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:36.124242   60008 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385240" ...
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
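The probe above waits for the guest to accept SSH by repeatedly invoking the external ssh client with the trivial command "exit 0" until it succeeds. Below is a minimal Go sketch of such a wait loop using only the standard library; the host address, key path, and timeout are placeholders for illustration and this is not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs `ssh ... exit 0` until the command succeeds
// or the deadline expires. Host and key path are caller-supplied placeholders.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH answered and the remote shell exited cleanly
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("ssh on %s not reachable within %s", host, timeout)
}

func main() {
	// Hypothetical values for illustration only.
	if err := waitForSSH("192.168.61.28", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}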
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
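The shell snippet above keeps the /etc/hosts edit idempotent: the 127.0.1.1 entry is only rewritten (or appended) when the hostname is not already present. A rough local Go equivalent of that check-then-edit logic follows, assuming a writable hosts-file path; it is illustrative only, not the code minikube runs on the guest.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the idempotent edit above: if no entry for the
// hostname exists, rewrite (or append) the 127.0.1.1 line.
func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already present, nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loop.Match(data) {
		out = loop.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(hostsPath, []byte(out), 0644)
}

func main() {
	// Placeholder path so the example does not touch the real /etc/hosts.
	if err := ensureHostname("/tmp/hosts-example", "old-k8s-version-159022"); err != nil {
		fmt.Println(err)
	}
}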
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
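The configureAuth step above refreshes the cached CA/client certificates (remove any stale copy, then copy the source file back in) before pushing the server certs to /etc/docker on the guest. A small Go sketch of that replace-then-copy pattern with placeholder paths; the real step also generates the server certificate and copies files over SSH, which is omitted here.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyCert removes any existing destination file, ensures the target
// directory exists, then copies the source certificate in.
func copyCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths for illustration.
	if err := copyCert("certs/ca.pem", "out/ca.pem"); err != nil {
		fmt.Println(err)
	}
}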
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
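The clock check above runs `date +%s.%N` on the guest and compares the result against the host clock, accepting the 91.958788ms delta as within tolerance. Here is a short Go sketch of that comparison, reusing the timestamp string from this run; the 2-second tolerance is an assumed example value, not the threshold minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (as in the log above)
// and returns how far the guest clock is from the supplied local time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		for len(frac) < 9 { // pad the fractional part to nanosecond precision
			frac += "0"
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return local.Sub(time.Unix(sec, nsec)), nil
}

func main() {
	// Timestamp string taken from this run; the tolerance below is assumed.
	delta, err := clockDelta("1710880536.082251645", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}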
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
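After restarting CRI-O, the run above waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal Go sketch of such a poll-for-path loop; in the log the stat is executed on the guest over SSH, whereas this stand-in checks a local path with an illustrative poll interval.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the timeout expires.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket (or file) is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}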
	I0319 20:35:33.742461   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.241937   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.743420   59415 node_ready.go:49] node "embed-certs-421660" has status "Ready":"True"
	I0319 20:35:36.743447   59415 node_ready.go:38] duration metric: took 7.006070851s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:36.743458   59415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:36.749810   59415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:36.125778   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Start
	I0319 20:35:36.125974   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring networks are active...
	I0319 20:35:36.126542   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network default is active
	I0319 20:35:36.126934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network mk-default-k8s-diff-port-385240 is active
	I0319 20:35:36.127367   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Getting domain xml...
	I0319 20:35:36.128009   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Creating domain...
	I0319 20:35:37.396589   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting to get IP...
	I0319 20:35:37.397626   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398211   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398294   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.398203   60655 retry.go:31] will retry after 263.730992ms: waiting for machine to come up
	I0319 20:35:37.663811   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664345   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664379   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.664300   60655 retry.go:31] will retry after 308.270868ms: waiting for machine to come up
	I0319 20:35:37.974625   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975061   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975095   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.975027   60655 retry.go:31] will retry after 376.884777ms: waiting for machine to come up
	I0319 20:35:38.353624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354101   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354129   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.354056   60655 retry.go:31] will retry after 419.389718ms: waiting for machine to come up
	I0319 20:35:38.774777   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775271   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775299   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.775224   60655 retry.go:31] will retry after 757.534448ms: waiting for machine to come up
	I0319 20:35:39.534258   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534739   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534766   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:39.534698   60655 retry.go:31] will retry after 921.578914ms: waiting for machine to come up
	I0319 20:35:40.457637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458154   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:40.458092   60655 retry.go:31] will retry after 1.079774724s: waiting for machine to come up
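The "will retry after ..." lines above show the driver polling for the VM's DHCP lease with growing, slightly randomized delays. Below is a generic Go sketch of that retry-with-backoff pattern; the attempt count, base delay, and fake condition are illustrative only and do not reproduce minikube's retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or maxAttempts is reached, sleeping a
// growing, slightly randomized interval between attempts.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	attempts := 0
	err := retry(5, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}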
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:38.759504   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.258580   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.539643   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540163   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:41.540113   60655 retry.go:31] will retry after 1.174814283s: waiting for machine to come up
	I0319 20:35:42.716195   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716547   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716576   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:42.716510   60655 retry.go:31] will retry after 1.464439025s: waiting for machine to come up
	I0319 20:35:44.183190   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183673   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183701   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:44.183628   60655 retry.go:31] will retry after 2.304816358s: waiting for machine to come up
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
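The preload step above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it into /var with tar and lz4 before removing the tarball. A compact Go sketch of the check-then-extract part, run locally with placeholder paths; the real run first streams the tarball to the guest over SSH, which is omitted here.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload verifies the tarball exists, then unpacks it with
// tar + lz4 using the same command shape seen in the log above.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload not present: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Placeholder paths for illustration.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}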
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:43.756917   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:44.893540   59415 pod_ready.go:92] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.893576   59415 pod_ready.go:81] duration metric: took 8.143737931s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.893592   59415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903602   59415 pod_ready.go:92] pod "etcd-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.903640   59415 pod_ready.go:81] duration metric: took 10.03087ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903653   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926651   59415 pod_ready.go:92] pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.926682   59415 pod_ready.go:81] duration metric: took 23.020281ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926696   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935080   59415 pod_ready.go:92] pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.935113   59415 pod_ready.go:81] duration metric: took 8.409239ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935126   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947241   59415 pod_ready.go:92] pod "kube-proxy-qvn26" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.947269   59415 pod_ready.go:81] duration metric: took 12.135421ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947280   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155416   59415 pod_ready.go:92] pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:45.155441   59415 pod_ready.go:81] duration metric: took 208.152938ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155460   59415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:47.165059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:46.490600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491092   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491121   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:46.491050   60655 retry.go:31] will retry after 2.347371858s: waiting for machine to come up
	I0319 20:35:48.841516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.841995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.842018   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:48.841956   60655 retry.go:31] will retry after 2.70576525s: waiting for machine to come up
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.662363   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.662389   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.549431   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549931   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549959   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:51.549900   60655 retry.go:31] will retry after 3.429745322s: waiting for machine to come up
	I0319 20:35:54.983382   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.983875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Found IP for machine: 192.168.39.77
	I0319 20:35:54.983908   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserving static IP address...
	I0319 20:35:54.983923   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has current primary IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.984212   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.984240   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserved static IP address: 192.168.39.77
	I0319 20:35:54.984292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | skip adding static IP to network mk-default-k8s-diff-port-385240 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"}
	I0319 20:35:54.984307   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for SSH to be available...
	I0319 20:35:54.984322   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Getting to WaitForSSH function...
	I0319 20:35:54.986280   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986591   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.986624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986722   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH client type: external
	I0319 20:35:54.986752   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa (-rw-------)
	I0319 20:35:54.986783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:54.986796   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | About to run SSH command:
	I0319 20:35:54.986805   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | exit 0
	I0319 20:35:55.112421   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:55.112825   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetConfigRaw
	I0319 20:35:55.113456   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.115976   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116349   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.116377   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116587   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:35:55.116847   60008 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:55.116874   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:55.117099   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.119475   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.119911   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.119947   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.120112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.120312   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120478   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120629   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.120793   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.120970   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.120982   60008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:55.229055   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:55.229090   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229360   60008 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385240"
	I0319 20:35:55.229390   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229594   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.232039   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232371   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.232391   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232574   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.232746   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232866   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232967   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.233087   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.233251   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.233264   60008 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385240 && echo "default-k8s-diff-port-385240" | sudo tee /etc/hostname
	I0319 20:35:55.355708   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385240
	
	I0319 20:35:55.355732   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.358292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358610   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.358641   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358880   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.359105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359267   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359415   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.359545   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.359701   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.359724   60008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:55.479083   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:55.479109   60008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:55.479126   60008 buildroot.go:174] setting up certificates
	I0319 20:35:55.479134   60008 provision.go:84] configureAuth start
	I0319 20:35:55.479143   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.479433   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.482040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482378   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.482408   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482535   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.484637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485035   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.485062   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485212   60008 provision.go:143] copyHostCerts
	I0319 20:35:55.485272   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:55.485283   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:55.485334   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:55.485425   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:55.485434   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:55.485454   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:55.485560   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:55.485569   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:55.485586   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:55.485642   60008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385240 san=[127.0.0.1 192.168.39.77 default-k8s-diff-port-385240 localhost minikube]
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.449710   59019 start.go:364] duration metric: took 57.255031003s to acquireMachinesLock for "no-preload-414130"
	I0319 20:35:56.449774   59019 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:56.449786   59019 fix.go:54] fixHost starting: 
	I0319 20:35:56.450187   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:56.450225   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:56.469771   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0319 20:35:56.470265   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:56.470764   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:35:56.470799   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:56.471187   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:56.471362   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:35:56.471545   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:35:56.473295   59019 fix.go:112] recreateIfNeeded on no-preload-414130: state=Stopped err=<nil>
	I0319 20:35:56.473323   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	W0319 20:35:56.473480   59019 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:56.475296   59019 out.go:177] * Restarting existing kvm2 VM for "no-preload-414130" ...
	I0319 20:35:56.476767   59019 main.go:141] libmachine: (no-preload-414130) Calling .Start
	I0319 20:35:56.476947   59019 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:35:56.477657   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:35:56.478036   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:35:56.478443   59019 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:35:56.479131   59019 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:35:53.663220   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:56.163557   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:55.738705   60008 provision.go:177] copyRemoteCerts
	I0319 20:35:55.738779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:55.738812   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.741292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741618   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.741644   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741835   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.741997   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.742105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.742260   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:55.828017   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:55.854341   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0319 20:35:55.881167   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:55.906768   60008 provision.go:87] duration metric: took 427.621358ms to configureAuth
	I0319 20:35:55.906795   60008 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:55.907007   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:55.907097   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.909518   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.909834   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.909863   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.910008   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.910193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910492   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.910670   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.910835   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.910849   60008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:56.207010   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:56.207036   60008 machine.go:97] duration metric: took 1.090170805s to provisionDockerMachine
	I0319 20:35:56.207049   60008 start.go:293] postStartSetup for "default-k8s-diff-port-385240" (driver="kvm2")
	I0319 20:35:56.207066   60008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:56.207086   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.207410   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:56.207435   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.210075   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210494   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.210526   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210671   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.210828   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.211016   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.211167   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.295687   60008 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:56.300508   60008 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:56.300531   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:56.300601   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:56.300677   60008 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:56.300779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:56.310829   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:56.337456   60008 start.go:296] duration metric: took 130.396402ms for postStartSetup
	I0319 20:35:56.337492   60008 fix.go:56] duration metric: took 20.235571487s for fixHost
	I0319 20:35:56.337516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.339907   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340361   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.340388   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340552   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.340749   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.340888   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.341040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.341198   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:56.341357   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:56.341367   60008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:56.449557   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880556.425761325
	
	I0319 20:35:56.449580   60008 fix.go:216] guest clock: 1710880556.425761325
	I0319 20:35:56.449587   60008 fix.go:229] Guest: 2024-03-19 20:35:56.425761325 +0000 UTC Remote: 2024-03-19 20:35:56.337496936 +0000 UTC m=+175.893119280 (delta=88.264389ms)
	I0319 20:35:56.449619   60008 fix.go:200] guest clock delta is within tolerance: 88.264389ms
	I0319 20:35:56.449624   60008 start.go:83] releasing machines lock for "default-k8s-diff-port-385240", held for 20.347739998s
	I0319 20:35:56.449647   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.449915   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:56.452764   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453172   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.453204   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453363   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.453973   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454275   60008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:56.454328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.454443   60008 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:56.454466   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.457060   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457284   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457383   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457418   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457536   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457555   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457567   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457831   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457977   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458126   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458139   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.458282   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.537675   60008 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:56.564279   60008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:56.708113   60008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:56.716216   60008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:56.716301   60008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:56.738625   60008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:56.738643   60008 start.go:494] detecting cgroup driver to use...
	I0319 20:35:56.738707   60008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:56.756255   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:56.772725   60008 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:56.772785   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:56.793261   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:56.812368   60008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:56.948137   60008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:57.139143   60008 docker.go:233] disabling docker service ...
	I0319 20:35:57.139212   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:57.156414   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:57.173655   60008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:57.313924   60008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:57.459539   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:57.478913   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:57.506589   60008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:57.506663   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.520813   60008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:57.520871   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.534524   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.547833   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.568493   60008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:57.582367   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.595859   60008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.616441   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.633329   60008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:57.648803   60008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:57.648886   60008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:57.667845   60008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:57.680909   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:57.825114   60008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:57.996033   60008 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:57.996118   60008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:58.001875   60008 start.go:562] Will wait 60s for crictl version
	I0319 20:35:58.001947   60008 ssh_runner.go:195] Run: which crictl
	I0319 20:35:58.006570   60008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:58.060545   60008 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:58.060628   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.104858   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.148992   60008 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:58.150343   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:58.153222   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153634   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:58.153663   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153924   60008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:58.158830   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:58.174622   60008 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:58.174760   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:58.174819   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:58.220802   60008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:58.220879   60008 ssh_runner.go:195] Run: which lz4
	I0319 20:35:58.225914   60008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:58.230673   60008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:58.230702   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:59.959612   60008 crio.go:462] duration metric: took 1.733738299s to copy over tarball
	I0319 20:35:59.959694   60008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.782684   59019 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:35:57.783613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:57.784088   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:57.784180   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:57.784077   60806 retry.go:31] will retry after 304.011729ms: waiting for machine to come up
	I0319 20:35:58.089864   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.090398   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.090431   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.090325   60806 retry.go:31] will retry after 268.702281ms: waiting for machine to come up
	I0319 20:35:58.360743   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.361173   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.361201   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.361116   60806 retry.go:31] will retry after 373.34372ms: waiting for machine to come up
	I0319 20:35:58.735810   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.736490   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.736518   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.736439   60806 retry.go:31] will retry after 588.9164ms: waiting for machine to come up
	I0319 20:35:59.327363   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.327908   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.327938   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.327881   60806 retry.go:31] will retry after 623.38165ms: waiting for machine to come up
	I0319 20:35:59.952641   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.953108   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.953138   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.953090   60806 retry.go:31] will retry after 896.417339ms: waiting for machine to come up
	I0319 20:36:00.851032   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:00.851485   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:00.851514   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:00.851435   60806 retry.go:31] will retry after 869.189134ms: waiting for machine to come up
	I0319 20:35:58.168341   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:00.664629   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:02.594104   60008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634373226s)
	I0319 20:36:02.594140   60008 crio.go:469] duration metric: took 2.634502157s to extract the tarball
	I0319 20:36:02.594149   60008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:36:02.635454   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:02.692442   60008 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:36:02.692468   60008 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:36:02.692477   60008 kubeadm.go:928] updating node { 192.168.39.77 8444 v1.29.3 crio true true} ...
	I0319 20:36:02.692613   60008 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-385240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:02.692697   60008 ssh_runner.go:195] Run: crio config
	I0319 20:36:02.749775   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:02.749798   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:02.749809   60008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:02.749828   60008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385240 NodeName:default-k8s-diff-port-385240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:02.749967   60008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-385240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:02.750034   60008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:36:02.760788   60008 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:02.760843   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:02.770999   60008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0319 20:36:02.789881   60008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:36:02.809005   60008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0319 20:36:02.831122   60008 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:02.835609   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:02.850186   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:02.990032   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:03.013831   60008 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240 for IP: 192.168.39.77
	I0319 20:36:03.013858   60008 certs.go:194] generating shared ca certs ...
	I0319 20:36:03.013879   60008 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:03.014072   60008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:03.014125   60008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:03.014137   60008 certs.go:256] generating profile certs ...
	I0319 20:36:03.014256   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.key
	I0319 20:36:03.014325   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key.5c19d013
	I0319 20:36:03.014389   60008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key
	I0319 20:36:03.014549   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:03.014602   60008 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:03.014626   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:03.014658   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:03.014691   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:03.014728   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:03.014793   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:03.015673   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:03.070837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:03.115103   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:03.150575   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:03.210934   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 20:36:03.254812   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:36:03.286463   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:03.315596   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:03.347348   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:03.375837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:03.407035   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:03.439726   60008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:03.461675   60008 ssh_runner.go:195] Run: openssl version
	I0319 20:36:03.468238   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:03.482384   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487682   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487739   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.494591   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:03.509455   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:03.522545   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527556   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527617   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.533925   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:03.546851   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:03.559553   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564547   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564595   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.570824   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:03.584339   60008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:03.589542   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:03.595870   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:03.602530   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:03.609086   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:03.615621   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:03.622477   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:36:03.629097   60008 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:03.629186   60008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:03.629234   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.674484   60008 cri.go:89] found id: ""
	I0319 20:36:03.674568   60008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:03.686995   60008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:03.687020   60008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:03.687026   60008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:03.687094   60008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:03.702228   60008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:03.703334   60008 kubeconfig.go:125] found "default-k8s-diff-port-385240" server: "https://192.168.39.77:8444"
	I0319 20:36:03.705508   60008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:03.719948   60008 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.77
	I0319 20:36:03.719985   60008 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:03.719997   60008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:03.720073   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.761557   60008 cri.go:89] found id: ""
	I0319 20:36:03.761619   60008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:03.781849   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:03.793569   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:03.793601   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:03.793652   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:36:03.804555   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:03.804605   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:03.816728   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:36:03.828247   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:03.828318   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:03.840814   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.853100   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:03.853168   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.867348   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:36:03.879879   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:03.879944   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:03.893810   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:03.906056   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:04.038911   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.173514   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134566983s)
	I0319 20:36:05.173547   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.395951   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.480821   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.721671   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:01.722186   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:01.722212   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:01.722142   60806 retry.go:31] will retry after 997.299446ms: waiting for machine to come up
	I0319 20:36:02.720561   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:02.721007   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:02.721037   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:02.720958   60806 retry.go:31] will retry after 1.64420318s: waiting for machine to come up
	I0319 20:36:04.367668   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:04.368140   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:04.368179   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:04.368083   60806 retry.go:31] will retry after 1.972606192s: waiting for machine to come up
	I0319 20:36:06.342643   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:06.343192   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:06.343236   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:06.343136   60806 retry.go:31] will retry after 2.056060208s: waiting for machine to come up
	I0319 20:36:03.164447   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.665089   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.581797   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:05.581879   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.082565   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.582872   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.628756   60008 api_server.go:72] duration metric: took 1.046965637s to wait for apiserver process to appear ...
	I0319 20:36:06.628786   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:06.628808   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:06.629340   60008 api_server.go:269] stopped: https://192.168.39.77:8444/healthz: Get "https://192.168.39.77:8444/healthz": dial tcp 192.168.39.77:8444: connect: connection refused
	I0319 20:36:07.128890   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.231991   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.232024   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.232039   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.280784   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.280820   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.629356   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.660326   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:09.660434   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.128936   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.139305   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:10.139336   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.629187   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.635922   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:36:10.654111   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:36:10.654137   60008 api_server.go:131] duration metric: took 4.025345365s to wait for apiserver health ...
	I0319 20:36:10.654146   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:10.654154   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:10.656104   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
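The api_server.go lines above poll the apiserver /healthz endpoint (here https://192.168.39.77:8444/healthz): a 500 response listing a few "[-]poststarthook/... failed" entries is treated as "not ready yet" and retried until a 200 comes back; in this run the endpoint flips from 500 to 200 within about half a second. The following is a minimal stand-alone Go sketch of that polling pattern, not minikube's implementation; the URL is copied from the log, the timeout is an arbitrary illustrative value, and certificate verification is skipped only because the test apiserver presents a self-signed certificate.

// healthz_poll.go: a minimal sketch (not minikube's api_server.go) of polling an
// apiserver /healthz endpoint until it reports 200 or a deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The test apiserver serves a self-signed certificate, so verification is
	// skipped for this probe only; do not do this against a real cluster.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is serving
			}
			// A 500 with failed post-start hooks (rbac/bootstrap-roles etc.)
			// just means bootstrap is still running; fall through and retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not report healthy within %s", url, timeout)
}

func main() {
	// URL from the log above; the timeout is an assumed value for illustration.
	if err := waitForHealthz("https://192.168.39.77:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}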
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.401478   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:08.402086   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:08.402111   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:08.402001   60806 retry.go:31] will retry after 2.487532232s: waiting for machine to come up
	I0319 20:36:10.891005   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:10.891550   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:10.891591   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:10.891503   60806 retry.go:31] will retry after 3.741447035s: waiting for machine to come up
	I0319 20:36:08.163468   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.165537   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:12.661667   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.657654   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:10.672795   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:10.715527   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:10.728811   60008 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:10.728850   60008 system_pods.go:61] "coredns-76f75df574-hsdk2" [319e5411-97e4-4021-80d0-b39195acb696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:10.728862   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [d10870b0-a0e1-47aa-baf9-07065c1d9142] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:10.728873   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [4925af1b-328f-42ee-b2ef-78b58fcbdd0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:10.728883   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [6dad1c39-3fbc-4364-9ed8-725c0f518191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:10.728889   60008 system_pods.go:61] "kube-proxy-bwj22" [9cc86566-612e-48bc-94c9-a2dad6978c92] Running
	I0319 20:36:10.728896   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [e9c38443-ea8c-4590-94ca-61077f850b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:10.728904   60008 system_pods.go:61] "metrics-server-57f55c9bc5-ddl2q" [ecb174e4-18b0-459e-afb1-137a1f6bdd67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:10.728919   60008 system_pods.go:61] "storage-provisioner" [95fb27b5-769c-4420-8021-3d97942c9f42] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0319 20:36:10.728931   60008 system_pods.go:74] duration metric: took 13.321799ms to wait for pod list to return data ...
	I0319 20:36:10.728944   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:10.743270   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:10.743312   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:10.743326   60008 node_conditions.go:105] duration metric: took 14.37332ms to run NodePressure ...
	I0319 20:36:10.743348   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:11.028786   60008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034096   60008 kubeadm.go:733] kubelet initialised
	I0319 20:36:11.034115   60008 kubeadm.go:734] duration metric: took 5.302543ms waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034122   60008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:11.040118   60008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.046021   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046048   60008 pod_ready.go:81] duration metric: took 5.906752ms for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.046060   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046069   60008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.051677   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051700   60008 pod_ready.go:81] duration metric: took 5.61463ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.051712   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051721   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.057867   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057893   60008 pod_ready.go:81] duration metric: took 6.163114ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.057905   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057912   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:13.065761   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
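The pod_ready.go entries above wait for each system-critical pod in kube-system to report the Ready condition, skipping the wait while the hosting node itself is not Ready. Below is a minimal client-go sketch of that kind of check; it is not minikube's pod_ready.go, the kubeconfig path is a placeholder, and only the namespace, pod name and the 4m0s budget are taken from the log.

// pod_ready_sketch.go: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; pod and namespace come from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the "up to 4m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-default-k8s-diff-port-385240", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}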
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.634526   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:14.635125   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:14.635155   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:14.635074   60806 retry.go:31] will retry after 3.841866145s: waiting for machine to come up
	I0319 20:36:14.662669   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.664913   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:15.565340   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:17.567623   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:19.570775   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.479276   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.479810   59019 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:36:18.479836   59019 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:36:18.479852   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.480232   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.480279   59019 main.go:141] libmachine: (no-preload-414130) DBG | skip adding static IP to network mk-no-preload-414130 - found existing host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"}
	I0319 20:36:18.480297   59019 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:36:18.480319   59019 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:36:18.480336   59019 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:36:18.482725   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483025   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.483052   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483228   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:36:18.483262   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:36:18.483299   59019 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:36:18.483320   59019 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:36:18.483373   59019 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:36:18.612349   59019 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
	I0319 20:36:18.612766   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:36:18.613495   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.616106   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616459   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.616498   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616729   59019 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:36:18.616940   59019 machine.go:94] provisionDockerMachine start ...
	I0319 20:36:18.616957   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:18.617150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.619316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619599   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.619620   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619750   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.619895   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620054   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620166   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.620339   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.620508   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.620521   59019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:36:18.729177   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:36:18.729203   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729483   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:36:18.729511   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729728   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.732330   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732633   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.732664   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.732944   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733087   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733211   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.733347   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.733513   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.733528   59019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:36:18.857142   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:36:18.857178   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.860040   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860434   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.860465   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860682   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.860907   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861102   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861283   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.861462   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.861661   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.861685   59019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:36:18.976726   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
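The provisioning block above dials the freshly booted VM over SSH with the machine's id_rsa key and runs small shell commands (hostname, then the /etc/hostname and /etc/hosts edits). As a rough illustration of that remote-exec step, here is a short sketch using golang.org/x/crypto/ssh rather than minikube's libmachine plumbing; the user, address and key path are copied from the log, and host-key checking is disabled to mirror the UserKnownHostsFile=/dev/null option shown earlier.

// ssh_exec_sketch.go: run a single command on the VM over SSH with key auth.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address as recorded in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	config := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.72.29:22", config)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}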
	I0319 20:36:18.976755   59019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:36:18.976776   59019 buildroot.go:174] setting up certificates
	I0319 20:36:18.976789   59019 provision.go:84] configureAuth start
	I0319 20:36:18.976803   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.977095   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.980523   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.980948   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.980976   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.981150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.983394   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983720   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.983741   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983887   59019 provision.go:143] copyHostCerts
	I0319 20:36:18.983949   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:36:18.983959   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:36:18.984009   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:36:18.984092   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:36:18.984099   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:36:18.984118   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:36:18.984224   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:36:18.984237   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:36:18.984284   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:36:18.984348   59019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
	I0319 20:36:19.241365   59019 provision.go:177] copyRemoteCerts
	I0319 20:36:19.241422   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:36:19.241445   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.244060   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244362   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.244388   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244593   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.244781   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.244956   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.245125   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.332749   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:36:19.360026   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:36:19.386680   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:36:19.414673   59019 provision.go:87] duration metric: took 437.87318ms to configureAuth
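provision.go above issues a server certificate for the machine, signed by the local minikube CA, with SANs [127.0.0.1 192.168.72.29 localhost minikube no-preload-414130], and then copies server.pem, server-key.pem and ca.pem onto the VM. The crypto/x509 sketch below shows one way such a certificate could be issued; it is illustrative rather than minikube's code, and it assumes the CA key is a PEM-encoded PKCS#1 RSA key (the file names, SANs and org string are taken from the log).

// server_cert_sketch.go: issue a CA-signed server certificate with explicit SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair; assumes ca-key.pem is a PKCS#1 RSA private key.
	caCertPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("could not decode CA PEM files")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh server key plus a certificate carrying the SANs from the log above.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-414130"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-414130"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.29")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)

	must(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	must(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
}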
	I0319 20:36:19.414697   59019 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:36:19.414893   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:36:19.414964   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.417627   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.417949   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.417974   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.418139   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.418351   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418513   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418687   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.418854   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.419099   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.419120   59019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:36:19.712503   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:36:19.712538   59019 machine.go:97] duration metric: took 1.095583423s to provisionDockerMachine
	I0319 20:36:19.712554   59019 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:36:19.712573   59019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:36:19.712595   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.712918   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:36:19.712953   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.715455   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715779   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.715813   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715917   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.716098   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.716307   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.716455   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.801402   59019 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:36:19.806156   59019 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:36:19.806181   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:36:19.806253   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:36:19.806330   59019 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:36:19.806451   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:36:19.818601   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:19.845698   59019 start.go:296] duration metric: took 133.131789ms for postStartSetup
	I0319 20:36:19.845728   59019 fix.go:56] duration metric: took 23.395944884s for fixHost
	I0319 20:36:19.845746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.848343   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848727   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.848760   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848909   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.849090   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849256   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849452   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.849667   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.849843   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.849853   59019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:36:19.957555   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880579.901731357
	
	I0319 20:36:19.957574   59019 fix.go:216] guest clock: 1710880579.901731357
	I0319 20:36:19.957581   59019 fix.go:229] Guest: 2024-03-19 20:36:19.901731357 +0000 UTC Remote: 2024-03-19 20:36:19.845732308 +0000 UTC m=+363.236094224 (delta=55.999049ms)
	I0319 20:36:19.957612   59019 fix.go:200] guest clock delta is within tolerance: 55.999049ms
	I0319 20:36:19.957625   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 23.507874645s
	I0319 20:36:19.957656   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.957889   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:19.960613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.960930   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.960957   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.961108   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961627   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961804   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961883   59019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:36:19.961930   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.961996   59019 ssh_runner.go:195] Run: cat /version.json
	I0319 20:36:19.962022   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.964593   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.964790   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965034   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965057   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965250   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965368   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965397   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965416   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965529   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965611   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965677   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965764   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.965788   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965893   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:20.041410   59019 ssh_runner.go:195] Run: systemctl --version
	I0319 20:36:20.067540   59019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:36:20.214890   59019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:36:20.222680   59019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:36:20.222735   59019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:36:20.239981   59019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:36:20.240003   59019 start.go:494] detecting cgroup driver to use...
	I0319 20:36:20.240066   59019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:36:20.260435   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:36:20.277338   59019 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:36:20.277398   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:36:20.294069   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:36:20.309777   59019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:36:20.443260   59019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:36:20.595476   59019 docker.go:233] disabling docker service ...
	I0319 20:36:20.595552   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:36:20.612622   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:36:20.627717   59019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:36:20.790423   59019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:36:20.915434   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:36:20.932043   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:36:20.953955   59019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:36:20.954026   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.966160   59019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:36:20.966230   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.978217   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.990380   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.002669   59019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:36:21.014880   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.026125   59019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.045239   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.056611   59019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:36:21.067763   59019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:36:21.067818   59019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:36:21.084054   59019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:36:21.095014   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:21.237360   59019 ssh_runner.go:195] Run: sudo systemctl restart crio
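The crio.go steps above rewrite CRI-O's drop-in config /etc/crio/crio.conf.d/02-crio.conf with sed (pause image registry.k8s.io/pause:3.9, cgroupfs cgroup manager, sysctl defaults), then daemon-reload and restart the service. Purely as an illustration of the two headline edits, the small Go sketch below applies the same substitutions to the file; it is not what minikube runs (the real flow pipes sed over SSH) and it assumes it executes on the VM itself, with the path and values taken from the log.

// crio_conf_sketch.go: rough Go equivalent of the two sed edits logged above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in file from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Point CRI-O at the pause image and the cgroupfs cgroup manager.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
	// CRI-O still needs a restart to pick this up, as the log's
	// "sudo systemctl restart crio" step shows.
}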
	I0319 20:36:21.396979   59019 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:36:21.397047   59019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:36:21.402456   59019 start.go:562] Will wait 60s for crictl version
	I0319 20:36:21.402509   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.406963   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:36:21.446255   59019 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:36:21.446351   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.477273   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.519196   59019 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:36:21.520520   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:21.523401   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.523792   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:21.523822   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.524033   59019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:36:21.528973   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:21.543033   59019 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:36:21.543154   59019 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:36:21.543185   59019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:21.583439   59019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:36:21.583472   59019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:36:21.583515   59019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.583551   59019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.583566   59019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.583610   59019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.583622   59019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.583646   59019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.583731   59019 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:36:21.583766   59019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585216   59019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585225   59019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.585236   59019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.585210   59019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.585247   59019 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:36:21.585253   59019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.585285   59019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.585297   59019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:19.163241   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:21.165282   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:22.071931   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:24.567506   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.567537   60008 pod_ready.go:81] duration metric: took 13.509614974s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.567553   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573414   60008 pod_ready.go:92] pod "kube-proxy-bwj22" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.573444   60008 pod_ready.go:81] duration metric: took 5.881434ms for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573457   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580429   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.580452   60008 pod_ready.go:81] duration metric: took 6.984808ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580463   60008 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.722682   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.727610   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:36:21.738933   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.740326   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.772871   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.801213   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.829968   59019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:36:21.830008   59019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.830053   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.832291   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.945513   59019 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:36:21.945558   59019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.945612   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945618   59019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:36:21.945651   59019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.945663   59019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:36:21.945687   59019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.945695   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945721   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970009   59019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:36:21.970052   59019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.970079   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.970090   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970100   59019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:36:21.970125   59019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.970149   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970177   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:22.062153   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:36:22.062260   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.063754   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.063840   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.091003   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091052   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:22.091104   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091335   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:22.091372   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0319 20:36:22.091382   59019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091405   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:36:22.091423   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0319 20:36:22.091426   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091475   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:22.096817   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0319 20:36:22.155139   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.155289   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.190022   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0319 20:36:22.190072   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.190166   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.507872   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445006   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.353551966s)
	I0319 20:36:26.445031   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0319 20:36:26.445049   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445063   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (4.289744726s)
	I0319 20:36:26.445095   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445099   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445107   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (4.254920134s)
	I0319 20:36:26.445135   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445176   59019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.937263856s)
	I0319 20:36:26.445228   59019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:36:26.445254   59019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445296   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:23.665322   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.167485   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.588550   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:29.088665   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.407117   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (1.96198659s)
	I0319 20:36:28.407156   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:36:28.407176   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407171   59019 ssh_runner.go:235] Completed: which crictl: (1.961850083s)
	I0319 20:36:28.407212   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407244   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:30.495567   59019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088296063s)
	I0319 20:36:30.495590   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.088358118s)
	I0319 20:36:30.495606   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:36:30.495617   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:36:30.495633   59019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495686   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495735   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:28.662588   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.163637   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.589581   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:34.090180   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.473194   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977482574s)
	I0319 20:36:32.473238   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:36:32.473263   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:32.473260   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977498716s)
	I0319 20:36:32.473294   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0319 20:36:32.473311   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:34.927774   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.454440131s)
	I0319 20:36:34.927813   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:36:34.927842   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:34.927888   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:33.664608   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.163358   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.588459   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:38.590173   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.512011   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.584091271s)
	I0319 20:36:37.512048   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0319 20:36:37.512077   59019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:37.512134   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:38.589202   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.077040733s)
	I0319 20:36:38.589231   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0319 20:36:38.589263   59019 cache_images.go:123] Successfully loaded all cached images
	I0319 20:36:38.589278   59019 cache_images.go:92] duration metric: took 17.005785801s to LoadCachedImages
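
The cached-image phase above follows a simple pattern: stat the tarball under /var/lib/minikube/images, skip the transfer when it already exists on the node, then `podman load` it into the CRI-O image store. A minimal sketch of that flow in Go, with hypothetical paths and image names (this is not minikube's actual cache_images.go):

// Sketch of the cache-image load flow visible in the log: for each cached
// tarball, skip the copy if the file is already present, then load it into
// the podman/CRI-O image store. cacheDir and the tarball list are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	cacheDir := "/var/lib/minikube/images" // hypothetical node-side cache dir
	tarballs := []string{"etcd_3.5.12-0", "kube-proxy_v1.30.0-beta.0"}

	for _, name := range tarballs {
		path := filepath.Join(cacheDir, name)
		if _, err := os.Stat(path); err != nil {
			// In minikube the tarball would be copied over first; here we just skip.
			fmt.Printf("skipping %s: %v\n", name, err)
			continue
		}
		// Load the tarball into the container runtime's image store.
		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
		if err != nil {
			fmt.Printf("load %s failed: %v\n%s", name, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", name)
	}
}
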
	I0319 20:36:38.589291   59019 kubeadm.go:928] updating node { 192.168.72.29 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:36:38.589415   59019 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-414130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:38.589495   59019 ssh_runner.go:195] Run: crio config
	I0319 20:36:38.648312   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:38.648334   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:38.648346   59019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:38.648366   59019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-414130 NodeName:no-preload-414130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:38.648494   59019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-414130"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:38.648554   59019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:36:38.665850   59019 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:38.665928   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:38.678211   59019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0319 20:36:38.701657   59019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:36:38.721498   59019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0319 20:36:38.741159   59019 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:38.745617   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:38.759668   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:38.896211   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:38.916698   59019 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130 for IP: 192.168.72.29
	I0319 20:36:38.916720   59019 certs.go:194] generating shared ca certs ...
	I0319 20:36:38.916748   59019 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:38.916888   59019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:38.916930   59019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:38.916943   59019 certs.go:256] generating profile certs ...
	I0319 20:36:38.917055   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.key
	I0319 20:36:38.917134   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key.2d7d554c
	I0319 20:36:38.917185   59019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key
	I0319 20:36:38.917324   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:38.917381   59019 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:38.917396   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:38.917434   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:38.917469   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:38.917501   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:38.917553   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:38.918130   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:38.959630   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:39.007656   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:39.046666   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:39.078901   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:36:39.116600   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:36:39.158517   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:39.188494   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:39.218770   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:39.247341   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:39.275816   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:39.303434   59019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:39.326445   59019 ssh_runner.go:195] Run: openssl version
	I0319 20:36:39.333373   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:39.346280   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352619   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352686   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.359796   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:39.372480   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:39.384231   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389760   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389818   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.396639   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:39.408887   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:39.421847   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427779   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427848   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.434447   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
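
The openssl/ln pairs above install each CA the way OpenSSL expects to find it: `openssl x509 -hash -noout` prints the certificate's subject hash (e.g. b5213941), and a `<hash>.0` symlink in /etc/ssl/certs points back at the PEM so lookup-by-hash succeeds. A small sketch of the same idea, assuming a hypothetical certificate path and a writable certs directory:

// Sketch of the CA-install step in the log: compute the OpenSSL subject hash
// of a PEM certificate and create the /etc/ssl/certs/<hash>.0 symlink that
// OpenSSL's lookup-by-hash mechanism uses. Paths are hypothetical.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // hypothetical cert path

	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL resolves CAs via <subject-hash>.0 symlinks in the certs directory.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("installed", pem, "as", link)
}
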
	I0319 20:36:39.446945   59019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:39.452219   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:39.458729   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:39.465298   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:39.471931   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:39.478810   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:39.485551   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:36:39.492084   59019 kubeadm.go:391] StartCluster: {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:39.492210   59019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:39.492297   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.535094   59019 cri.go:89] found id: ""
	I0319 20:36:39.535157   59019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:39.549099   59019 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:39.549123   59019 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:39.549129   59019 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:39.549179   59019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:39.560565   59019 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:39.561570   59019 kubeconfig.go:125] found "no-preload-414130" server: "https://192.168.72.29:8443"
	I0319 20:36:39.563750   59019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:39.578708   59019 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.29
	I0319 20:36:39.578746   59019 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:39.578756   59019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:39.578799   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.620091   59019 cri.go:89] found id: ""
	I0319 20:36:39.620152   59019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:39.639542   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:39.652115   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:39.652133   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:39.652190   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:36:39.664047   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:39.664114   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:39.675218   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:36:39.685482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:39.685533   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:39.695803   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.705482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:39.705538   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.715747   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:36:39.725260   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:39.725324   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:39.735246   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:39.745069   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:39.862945   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.548185   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.794369   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.891458   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
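
Instead of a full `kubeadm init`, the restart path re-runs the individual `kubeadm init phase` steps shown above against the regenerated kubeadm.yaml. A hedged sketch of that sequence (phase names and config path are taken from the log; this is not minikube's bootstrapper code):

// Sketch of re-running the kubeadm init phases from the log against an
// existing config file, rather than performing a full `kubeadm init`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	config := "/var/tmp/minikube/kubeadm.yaml" // path taken from the log
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}

	for _, phase := range phases {
		// Build e.g.: kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
		fmt.Printf("phase %q ok\n", phase)
	}
}
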
	I0319 20:36:40.992790   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:40.992871   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.493489   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.164706   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:40.662753   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:42.663084   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.087924   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:43.087987   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.993208   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.040237   59019 api_server.go:72] duration metric: took 1.047447953s to wait for apiserver process to appear ...
	I0319 20:36:42.040278   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:42.040323   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:42.040927   59019 api_server.go:269] stopped: https://192.168.72.29:8443/healthz: Get "https://192.168.72.29:8443/healthz": dial tcp 192.168.72.29:8443: connect: connection refused
	I0319 20:36:42.541457   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.853765   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:44.853796   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:44.853834   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.967607   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:44.967648   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.040791   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.049359   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.049400   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.541024   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.545880   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.545907   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.041423   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.046075   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.046101   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.541147   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.546547   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.546587   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:44.664041   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.163545   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.040899   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.046413   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.046453   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:47.541051   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.547309   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.547334   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.040856   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.046293   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:48.046318   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.540858   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.545311   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:36:48.551941   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:36:48.551962   59019 api_server.go:131] duration metric: took 6.511678507s to wait for apiserver health ...
	I0319 20:36:48.551970   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:48.551976   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:48.553824   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
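The polling above succeeds once https://192.168.72.29:8443/healthz finally returns 200 at 20:36:48. For reference, the same endpoint can be probed by hand from the node; this is only a sketch (curl with certificate verification skipped), not part of the test run:

    # Per-check detail, matching the [+]/[-] lines in the log above
    curl -sk "https://192.168.72.29:8443/healthz?verbose"
    # A single check can also be queried, e.g. the hook seen failing earlier
    curl -sk "https://192.168.72.29:8443/healthz/poststarthook/rbac/bootstrap-roles"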
	I0319 20:36:45.588011   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.589644   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:50.088130   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
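The cycle above keeps repeating for process 59621 (the run using the v1.20.0 binaries) because no control-plane containers exist yet: every crictl query returns an empty id list, kubectl cannot reach localhost:8443, and minikube falls back to gathering kubelet, dmesg, CRI-O and container-status logs. The same inventory can be taken by hand on the node; a sketch of the underlying commands, lifted from the log:

    # Look for a kube-apiserver container (empty output here means none is running)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Tail the logs minikube collects while the control plane is missing
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400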
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:48.555094   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:48.566904   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
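At this point minikube writes its bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. The exact 457-byte file is not reproduced in the log; the following is only an illustrative sketch of what a bridge/portmap conflist of this kind typically looks like, not the file the test generated:

    # Hypothetical example of a bridge CNI conflist (contents are an assumption)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF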
	I0319 20:36:48.592246   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:48.603249   59019 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:48.603277   59019 system_pods.go:61] "coredns-7db6d8ff4d-t42ph" [bc831304-6e17-452d-8059-22bb46bad525] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:48.603284   59019 system_pods.go:61] "etcd-no-preload-414130" [e2ac0f77-fade-4ac6-a472-58df4040a57d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:48.603294   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [1128c23f-0cc6-4cd4-aeed-32f3d4570e2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:48.603300   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [b03747b6-c3ed-44cf-bcc8-dc2cea408100] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:48.603304   59019 system_pods.go:61] "kube-proxy-dttkh" [23ac1cd6-588b-4745-9c0b-740f9f0e684c] Running
	I0319 20:36:48.603313   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [99fde84c-78d6-4c57-8889-c0d9f3b55a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:48.603318   59019 system_pods.go:61] "metrics-server-569cc877fc-jvlnl" [318246fd-b809-40fa-8aff-78eb33ea10fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:48.603322   59019 system_pods.go:61] "storage-provisioner" [80470118-b092-4ba1-b830-d6f13173434d] Running
	I0319 20:36:48.603327   59019 system_pods.go:74] duration metric: took 11.054488ms to wait for pod list to return data ...
	I0319 20:36:48.603336   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:48.606647   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:48.606667   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:48.606678   59019 node_conditions.go:105] duration metric: took 3.33741ms to run NodePressure ...
	I0319 20:36:48.606693   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:48.888146   59019 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898053   59019 kubeadm.go:733] kubelet initialised
	I0319 20:36:48.898073   59019 kubeadm.go:734] duration metric: took 9.903203ms waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898082   59019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:48.911305   59019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:50.918568   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:49.664061   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.162467   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.588174   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:55.088783   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:52.920852   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:54.419386   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.419410   59019 pod_ready.go:81] duration metric: took 5.508083852s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.419420   59019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926059   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.926081   59019 pod_ready.go:81] duration metric: took 506.65554ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926090   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930519   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.930538   59019 pod_ready.go:81] duration metric: took 4.441479ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930546   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.436969   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.436991   59019 pod_ready.go:81] duration metric: took 506.439126ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.437002   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443096   59019 pod_ready.go:92] pod "kube-proxy-dttkh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.443120   59019 pod_ready.go:81] duration metric: took 6.110267ms for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443132   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465091   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:56.465114   59019 pod_ready.go:81] duration metric: took 1.021974956s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465123   59019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
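Every control-plane pod on no-preload-414130 reports Ready within a few seconds; only metrics-server-569cc877fc-jvlnl never does, which is what the remaining polling below keeps waiting on. The same readiness check can be expressed with kubectl wait; a sketch, assuming the kubectl context is named after the profile as in the other tests in this report:

    # Wait for the pods the log polls above (pod names taken from the log)
    kubectl --context no-preload-414130 -n kube-system wait --for=condition=Ready \
      pod/coredns-7db6d8ff4d-t42ph pod/etcd-no-preload-414130 pod/kube-apiserver-no-preload-414130 \
      --timeout=4m0s
    # Inspect why metrics-server stays unready
    kubectl --context no-preload-414130 -n kube-system describe pod metrics-server-569cc877fc-jvlnl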
	I0319 20:36:54.163556   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.663128   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:57.589188   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.093044   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:58.471804   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.478113   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:58.663274   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.663336   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.663834   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.588018   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.087994   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:02.973736   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.471180   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.161734   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.161995   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.091895   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.588324   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:07.971754   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.974080   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.162947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:11.661800   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.088585   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.588430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:12.475641   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.971849   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:13.662232   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:15.663770   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.588577   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.590602   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.972091   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.972471   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.473526   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.161717   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:20.166272   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:22.661804   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.088608   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:23.587236   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:23.975755   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.472094   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:24.663592   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.664475   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:25.589800   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.087868   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.088398   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:28.472705   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.972037   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:29.162647   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.662382   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:32.088569   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.587686   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:32.972676   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:35.471024   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.161665   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.169093   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.587810   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.590891   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:37.472396   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:39.472831   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.662719   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:40.663337   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:41.088396   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.089545   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.972560   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:44.471857   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.162059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.163324   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:47.662016   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.588501   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:48.087736   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:50.091413   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:46.972679   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.473552   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.665655   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.164565   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.587562   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.587929   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:51.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.471542   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.472616   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.663037   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.666932   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.588855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.087276   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:58.971607   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.474225   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.165581   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.169140   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.087715   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.092449   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.972924   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.473336   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.665054   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.666413   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.588698   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.087857   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.088761   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:08.475109   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.973956   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.162101   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.663715   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:12.588671   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.088453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:13.473479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.971583   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:13.162747   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.163005   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.662157   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.587779   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:19.588889   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:18.473455   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.473644   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.162290   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:22.167423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.589226   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.090617   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:22.473728   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.971753   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.663722   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.665702   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.589574   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.088379   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:38:26.972361   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.471757   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.473307   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:28.666420   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.162701   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.089336   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.587759   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.473519   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:35.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.665536   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.161727   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.087816   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.587570   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:38.471727   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.472995   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.162339   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.162741   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:42.661979   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.588023   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.088381   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:45.091312   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:42.474517   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.971377   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.662325   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.663603   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:47.587861   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:50.086555   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.975287   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.475667   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.164849   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:51.661947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.087983   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.588099   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:51.971311   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:53.971907   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:55.972358   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.162991   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.163316   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.588120   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:59.090000   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:57.973293   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.471215   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:58.662516   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.663859   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.588049   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.589283   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.471981   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:04.472495   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.164344   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:05.665651   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:06.086880   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.088337   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:06.971135   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.971957   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.473482   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.162486   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.662096   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:12.662841   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.587271   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:13.087563   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:15.088454   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:13.972462   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.471598   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:14.664028   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.665749   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.587968   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.087138   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.471693   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.473198   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:19.162423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.164242   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:22.087921   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.089243   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:22.971141   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.971943   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:23.663805   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.162590   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.587708   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.588720   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.972042   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.972160   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:30.973402   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.664060   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.161446   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.092087   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.588849   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.473205   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.971076   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.162170   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.164007   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:37.662820   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.588912   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:38.086910   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:40.089385   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:37.971651   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.973467   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.663277   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.162392   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.587250   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.589855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.472493   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.973064   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.162740   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:45.162232   59415 pod_ready.go:81] duration metric: took 4m0.006756965s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:39:45.162255   59415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:39:45.162262   59415 pod_ready.go:38] duration metric: took 4m8.418792568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:39:45.162277   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:39:45.162309   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.162363   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.219659   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:45.219685   59415 cri.go:89] found id: ""
	I0319 20:39:45.219694   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:45.219737   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.225012   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.225072   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.268783   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.268803   59415 cri.go:89] found id: ""
	I0319 20:39:45.268810   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:45.268875   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.273758   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.273813   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:45.316870   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.316893   59415 cri.go:89] found id: ""
	I0319 20:39:45.316901   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:45.316942   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.321910   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:45.321968   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:45.360077   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:45.360098   59415 cri.go:89] found id: ""
	I0319 20:39:45.360105   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:45.360157   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.365517   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:45.365580   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:45.407686   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:45.407704   59415 cri.go:89] found id: ""
	I0319 20:39:45.407711   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:45.407752   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.412894   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:45.412954   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:45.451930   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:45.451953   59415 cri.go:89] found id: ""
	I0319 20:39:45.451964   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:45.452009   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.456634   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:45.456699   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:45.498575   59415 cri.go:89] found id: ""
	I0319 20:39:45.498601   59415 logs.go:276] 0 containers: []
	W0319 20:39:45.498611   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:45.498619   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:45.498678   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:45.548381   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.548400   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.548405   59415 cri.go:89] found id: ""
	I0319 20:39:45.548411   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:45.548469   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.553470   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.558445   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:45.558471   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.603464   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:45.603490   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.650631   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:45.650663   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:45.668744   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:45.668775   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:45.823596   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:45.823625   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.891879   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:45.891911   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.944237   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:45.944284   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:46.005819   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:46.005848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:46.069819   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.069848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.648008   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.648051   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:46.701035   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.701073   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.753159   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:46.753189   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:46.804730   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:46.804767   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:47.087453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.088165   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:39:47.471171   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.472374   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:49.351186   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.368780   59415 api_server.go:72] duration metric: took 4m19.832131165s to wait for apiserver process to appear ...
	I0319 20:39:49.368806   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:39:49.368844   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:49.368913   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:49.408912   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:49.408937   59415 cri.go:89] found id: ""
	I0319 20:39:49.408947   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:49.409010   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.414194   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:49.414263   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:49.456271   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.456298   59415 cri.go:89] found id: ""
	I0319 20:39:49.456307   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:49.456374   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.461250   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:49.461316   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:49.510029   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:49.510052   59415 cri.go:89] found id: ""
	I0319 20:39:49.510061   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:49.510119   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.515604   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:49.515667   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:49.561004   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:49.561026   59415 cri.go:89] found id: ""
	I0319 20:39:49.561034   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:49.561100   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.566205   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:49.566276   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:49.610666   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:49.610685   59415 cri.go:89] found id: ""
	I0319 20:39:49.610693   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:49.610735   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.615683   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:49.615730   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:49.657632   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:49.657648   59415 cri.go:89] found id: ""
	I0319 20:39:49.657655   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:49.657711   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.662128   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:49.662172   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:49.699037   59415 cri.go:89] found id: ""
	I0319 20:39:49.699060   59415 logs.go:276] 0 containers: []
	W0319 20:39:49.699068   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:49.699074   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:49.699131   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:49.754331   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:49.754353   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:49.754359   59415 cri.go:89] found id: ""
	I0319 20:39:49.754368   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:49.754437   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.759210   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.763797   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:49.763816   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.818285   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:49.818314   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:49.946232   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:49.946266   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.994160   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:49.994186   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:50.042893   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:50.042923   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:50.099333   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:50.099362   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:50.547046   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:50.547082   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:50.593081   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:50.593111   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:50.632611   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:50.632643   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:50.689610   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:50.689641   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:50.707961   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:50.707997   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:50.752684   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:50.752713   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:50.790114   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:50.790139   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:51.089647   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.588183   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:39:51.972170   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:54.471260   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:56.472093   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.338254   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:39:53.343669   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:39:53.344796   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:39:53.344816   59415 api_server.go:131] duration metric: took 3.976004163s to wait for apiserver health ...
	I0319 20:39:53.344824   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:39:53.344854   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:53.344896   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:53.407914   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.407939   59415 cri.go:89] found id: ""
	I0319 20:39:53.407948   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:53.408000   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.414299   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:53.414360   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:53.466923   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:53.466944   59415 cri.go:89] found id: ""
	I0319 20:39:53.466953   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:53.467006   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.472181   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:53.472247   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:53.511808   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:53.511830   59415 cri.go:89] found id: ""
	I0319 20:39:53.511839   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:53.511900   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.517386   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:53.517445   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:53.560360   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:53.560383   59415 cri.go:89] found id: ""
	I0319 20:39:53.560390   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:53.560433   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.565131   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:53.565181   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:53.611243   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:53.611264   59415 cri.go:89] found id: ""
	I0319 20:39:53.611273   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:53.611326   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.616327   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:53.616391   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:53.656775   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:53.656794   59415 cri.go:89] found id: ""
	I0319 20:39:53.656801   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:53.656846   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.661915   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:53.661966   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:53.700363   59415 cri.go:89] found id: ""
	I0319 20:39:53.700389   59415 logs.go:276] 0 containers: []
	W0319 20:39:53.700396   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:53.700401   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:53.700454   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:53.750337   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:53.750357   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:53.750360   59415 cri.go:89] found id: ""
	I0319 20:39:53.750373   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:53.750426   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.755835   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.761078   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:53.761099   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:53.812898   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:53.812928   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:53.934451   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:53.934482   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.989117   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:53.989148   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:54.386028   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:54.386060   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:54.437864   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:54.437893   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:54.456559   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:54.456584   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:54.506564   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:54.506593   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:54.551120   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:54.551151   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:54.595768   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:54.595794   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:54.637715   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:54.637745   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:54.689666   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:54.689706   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:54.731821   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:54.731851   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:57.287839   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:39:57.287866   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.287870   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.287874   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.287878   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.287881   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.287884   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.287890   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.287894   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.287901   59415 system_pods.go:74] duration metric: took 3.943071923s to wait for pod list to return data ...
	I0319 20:39:57.287907   59415 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:39:57.290568   59415 default_sa.go:45] found service account: "default"
	I0319 20:39:57.290587   59415 default_sa.go:55] duration metric: took 2.674741ms for default service account to be created ...
	I0319 20:39:57.290594   59415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:39:57.296691   59415 system_pods.go:86] 8 kube-system pods found
	I0319 20:39:57.296710   59415 system_pods.go:89] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.296718   59415 system_pods.go:89] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.296722   59415 system_pods.go:89] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.296726   59415 system_pods.go:89] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.296730   59415 system_pods.go:89] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.296734   59415 system_pods.go:89] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.296741   59415 system_pods.go:89] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.296747   59415 system_pods.go:89] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.296753   59415 system_pods.go:126] duration metric: took 6.154905ms to wait for k8s-apps to be running ...
	I0319 20:39:57.296762   59415 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:39:57.296803   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:57.313729   59415 system_svc.go:56] duration metric: took 16.960151ms WaitForService to wait for kubelet
	I0319 20:39:57.313753   59415 kubeadm.go:576] duration metric: took 4m27.777105553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:39:57.313777   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:39:57.316765   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:39:57.316789   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:39:57.316803   59415 node_conditions.go:105] duration metric: took 3.021397ms to run NodePressure ...
	I0319 20:39:57.316813   59415 start.go:240] waiting for startup goroutines ...
	I0319 20:39:57.316820   59415 start.go:245] waiting for cluster config update ...
	I0319 20:39:57.316830   59415 start.go:254] writing updated cluster config ...
	I0319 20:39:57.317087   59415 ssh_runner.go:195] Run: rm -f paused
	I0319 20:39:57.365814   59415 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:39:57.368111   59415 out.go:177] * Done! kubectl is now configured to use "embed-certs-421660" cluster and "default" namespace by default
	I0319 20:39:56.088199   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.088480   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.091027   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.971917   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.972329   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:02.589430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.088313   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:03.474330   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.972928   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:07.587315   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:09.588829   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:08.471254   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:10.472963   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.087905   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:14.589786   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.973661   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:15.471559   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.087489   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.087559   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.473159   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.975538   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:21.090446   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:23.588215   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.581466   60008 pod_ready.go:81] duration metric: took 4m0.000988658s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:24.581495   60008 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:40:24.581512   60008 pod_ready.go:38] duration metric: took 4m13.547382951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:24.581535   60008 kubeadm.go:591] duration metric: took 4m20.894503953s to restartPrimaryControlPlane
	W0319 20:40:24.581583   60008 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:24.581611   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:22.472853   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.972183   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:26.973460   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:28.974127   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:31.475479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:33.973020   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:36.471909   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:38.473008   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:40.975638   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:43.473149   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:45.474566   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:47.972615   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:50.472593   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:52.973302   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:55.472067   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:56.465422   59019 pod_ready.go:81] duration metric: took 4m0.000285496s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:56.465453   59019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0319 20:40:56.465495   59019 pod_ready.go:38] duration metric: took 4m7.567400515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:56.465521   59019 kubeadm.go:591] duration metric: took 4m16.916387223s to restartPrimaryControlPlane
	W0319 20:40:56.465574   59019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:56.465604   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:56.963018   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.381377433s)
	I0319 20:40:56.963106   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:40:56.982252   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:40:56.994310   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:40:57.004950   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:40:57.004974   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:40:57.005018   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:40:57.015009   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:40:57.015070   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:40:57.026153   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:40:57.036560   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:40:57.036611   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:40:57.047469   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.060137   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:40:57.060188   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.073305   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:40:57.083299   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:40:57.083372   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:40:57.093788   60008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:40:57.352358   60008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:05.910387   60008 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:41:05.910460   60008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:05.910542   60008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:05.910660   60008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:05.910798   60008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:05.910903   60008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:05.912366   60008 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:05.912439   60008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:05.912493   60008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:05.912563   60008 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:05.912614   60008 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:05.912673   60008 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:05.912726   60008 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:05.912809   60008 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:05.912874   60008 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:05.912975   60008 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:05.913082   60008 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:05.913142   60008 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:05.913197   60008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:05.913258   60008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:05.913363   60008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:05.913439   60008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:05.913536   60008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:05.913616   60008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:05.913738   60008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:05.913841   60008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:05.915394   60008 out.go:204]   - Booting up control plane ...
	I0319 20:41:05.915486   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:05.915589   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:05.915682   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:05.915832   60008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:05.915951   60008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:05.916010   60008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:05.916154   60008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:05.916255   60008 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505433 seconds
	I0319 20:41:05.916392   60008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:05.916545   60008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:05.916628   60008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:05.916839   60008 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-385240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:05.916908   60008 kubeadm.go:309] [bootstrap-token] Using token: y9pq78.ls188thm3dr5dool
	I0319 20:41:05.918444   60008 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:05.918567   60008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:05.918654   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:05.918821   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:05.918999   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:05.919147   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:05.919260   60008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:05.919429   60008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:05.919498   60008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:05.919572   60008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:05.919582   60008 kubeadm.go:309] 
	I0319 20:41:05.919665   60008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:05.919678   60008 kubeadm.go:309] 
	I0319 20:41:05.919787   60008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:05.919799   60008 kubeadm.go:309] 
	I0319 20:41:05.919834   60008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:05.919929   60008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:05.920007   60008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:05.920017   60008 kubeadm.go:309] 
	I0319 20:41:05.920102   60008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:05.920112   60008 kubeadm.go:309] 
	I0319 20:41:05.920182   60008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:05.920191   60008 kubeadm.go:309] 
	I0319 20:41:05.920284   60008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:05.920411   60008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:05.920506   60008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:05.920520   60008 kubeadm.go:309] 
	I0319 20:41:05.920648   60008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:05.920762   60008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:05.920771   60008 kubeadm.go:309] 
	I0319 20:41:05.920901   60008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921063   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:05.921099   60008 kubeadm.go:309] 	--control-plane 
	I0319 20:41:05.921105   60008 kubeadm.go:309] 
	I0319 20:41:05.921207   60008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:05.921216   60008 kubeadm.go:309] 
	I0319 20:41:05.921285   60008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921386   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:05.921397   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:41:05.921403   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:05.922921   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:05.924221   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:05.941888   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:06.040294   60008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:06.040378   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.040413   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-385240 minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=default-k8s-diff-port-385240 minikube.k8s.io/primary=true
	I0319 20:41:06.104038   60008 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:06.266168   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.766345   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.266622   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.766418   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.266864   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.766777   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.266420   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.766319   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:10.266990   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:10.766714   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.266839   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.767222   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.266933   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.766390   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.266562   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.766618   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.267159   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.767010   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.266307   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.767002   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.266488   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.766567   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.266789   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.766935   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.266312   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.767202   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.904766   60008 kubeadm.go:1107] duration metric: took 12.864451937s to wait for elevateKubeSystemPrivileges
	W0319 20:41:18.904802   60008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:18.904810   60008 kubeadm.go:393] duration metric: took 5m15.275720912s to StartCluster
	I0319 20:41:18.904826   60008 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.904910   60008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:18.906545   60008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.906817   60008 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:18.908538   60008 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:18.906944   60008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:18.907019   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:41:18.910084   60008 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:18.910100   60008 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910125   60008 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:18.910135   60008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385240"
	W0319 20:41:18.910141   60008 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:18.910255   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910127   60008 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.910313   60008 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:18.910334   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910603   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910635   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910647   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910667   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910692   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910671   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.927094   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0319 20:41:18.927240   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0319 20:41:18.927517   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.927620   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928036   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928059   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928074   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0319 20:41:18.928331   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928360   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928492   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928538   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928737   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928993   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.929009   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.929046   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.929066   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929108   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.929338   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.929862   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929893   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.932815   60008 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.932838   60008 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:18.932865   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.933211   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.933241   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.945888   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0319 20:41:18.946351   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.946842   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.946869   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.947426   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.947600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.947808   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0319 20:41:18.948220   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.948367   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0319 20:41:18.948739   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.948753   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.949222   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.949277   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.951252   60008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:18.949736   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.950173   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.951720   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.952838   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.952813   60008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:18.952917   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:18.952934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.952815   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.953264   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.953460   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.955228   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.957199   60008 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:18.958698   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:18.958715   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:18.958733   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.956502   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.957073   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.958806   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.958845   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.959306   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.959485   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.959783   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.961410   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961775   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.961802   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961893   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.962065   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.962213   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.962369   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.975560   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0319 20:41:18.976026   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.976503   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.976524   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.976893   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.977128   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.978582   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.978862   60008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:18.978881   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:18.978898   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.981356   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981730   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.981762   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.982056   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.982192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.982337   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:19.126985   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:19.188792   60008 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198961   60008 node_ready.go:49] node "default-k8s-diff-port-385240" has status "Ready":"True"
	I0319 20:41:19.198981   60008 node_ready.go:38] duration metric: took 10.160382ms for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198992   60008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:19.209346   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:19.335212   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:19.414291   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:19.506570   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:19.506590   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:19.651892   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:19.651916   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:19.808237   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:19.808282   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:19.924353   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:20.583635   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169310347s)
	I0319 20:41:20.583700   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.583717   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.583981   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.583991   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.584015   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.584027   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.584253   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.584282   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585518   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250274289s)
	I0319 20:41:20.585568   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585584   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.585855   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.585879   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.585888   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585902   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585916   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.586162   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.586168   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.586177   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.609166   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.609183   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.609453   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.609492   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.609502   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.750409   60008 pod_ready.go:92] pod "coredns-76f75df574-4rq6h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:20.750433   60008 pod_ready.go:81] duration metric: took 1.541065393s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.750442   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.869692   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.869719   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.869995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.870000   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870025   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870045   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.870057   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.870336   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870352   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870366   60008 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:20.872093   60008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:41:20.873465   60008 addons.go:505] duration metric: took 1.966520277s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:41:21.260509   60008 pod_ready.go:92] pod "coredns-76f75df574-swxdt" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.260533   60008 pod_ready.go:81] duration metric: took 510.083899ms for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.260543   60008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268298   60008 pod_ready.go:92] pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.268324   60008 pod_ready.go:81] duration metric: took 7.772878ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268335   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274436   60008 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.274461   60008 pod_ready.go:81] duration metric: took 6.117464ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274472   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281324   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.281347   60008 pod_ready.go:81] duration metric: took 6.866088ms for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281367   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.593980   60008 pod_ready.go:92] pod "kube-proxy-j7ghm" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.594001   60008 pod_ready.go:81] duration metric: took 312.62702ms for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.594009   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993321   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.993346   60008 pod_ready.go:81] duration metric: took 399.330556ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993362   60008 pod_ready.go:38] duration metric: took 2.794359581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:21.993375   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:21.993423   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:22.010583   60008 api_server.go:72] duration metric: took 3.10372573s to wait for apiserver process to appear ...
	I0319 20:41:22.010609   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:22.010629   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:41:22.015218   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:41:22.016276   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:41:22.016291   60008 api_server.go:131] duration metric: took 5.6763ms to wait for apiserver health ...
	I0319 20:41:22.016298   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:22.197418   60008 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:22.197454   60008 system_pods.go:61] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.197460   60008 system_pods.go:61] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.197465   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.197470   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.197476   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.197481   60008 system_pods.go:61] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.197485   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.197494   60008 system_pods.go:61] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.197500   60008 system_pods.go:61] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.197514   60008 system_pods.go:74] duration metric: took 181.210964ms to wait for pod list to return data ...
	I0319 20:41:22.197526   60008 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:22.392702   60008 default_sa.go:45] found service account: "default"
	I0319 20:41:22.392738   60008 default_sa.go:55] duration metric: took 195.195704ms for default service account to be created ...
	I0319 20:41:22.392751   60008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:22.595946   60008 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:22.595975   60008 system_pods.go:89] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.595980   60008 system_pods.go:89] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.595985   60008 system_pods.go:89] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.595991   60008 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.595996   60008 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.596006   60008 system_pods.go:89] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.596010   60008 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.596016   60008 system_pods.go:89] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.596022   60008 system_pods.go:89] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.596034   60008 system_pods.go:126] duration metric: took 203.277741ms to wait for k8s-apps to be running ...
	I0319 20:41:22.596043   60008 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:22.596087   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:22.615372   60008 system_svc.go:56] duration metric: took 19.319488ms WaitForService to wait for kubelet
	I0319 20:41:22.615396   60008 kubeadm.go:576] duration metric: took 3.708546167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:22.615413   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:22.793277   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:22.793303   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:22.793313   60008 node_conditions.go:105] duration metric: took 177.89499ms to run NodePressure ...
	I0319 20:41:22.793325   60008 start.go:240] waiting for startup goroutines ...
	I0319 20:41:22.793331   60008 start.go:245] waiting for cluster config update ...
	I0319 20:41:22.793342   60008 start.go:254] writing updated cluster config ...
	I0319 20:41:22.793598   60008 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:22.845339   60008 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:41:22.847429   60008 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-385240" cluster and "default" namespace by default
	I0319 20:41:29.064044   59019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.598411816s)
	I0319 20:41:29.064115   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:29.082924   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:41:29.095050   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:29.106905   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:29.106918   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:29.106962   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:29.118153   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:29.118209   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:29.128632   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:29.140341   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:29.140401   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:29.151723   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.162305   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:29.162365   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.173654   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:29.185155   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:29.185211   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:29.196015   59019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:29.260934   59019 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0319 20:41:29.261054   59019 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:29.412424   59019 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:29.412592   59019 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:29.412759   59019 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:29.636019   59019 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:29.638046   59019 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:29.638158   59019 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:29.638216   59019 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:29.638279   59019 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:29.638331   59019 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:29.645456   59019 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:29.645553   59019 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:29.645610   59019 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:29.645663   59019 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:29.645725   59019 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:29.645788   59019 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:29.645822   59019 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:29.645869   59019 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:29.895850   59019 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:30.248635   59019 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:30.380474   59019 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:30.457908   59019 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:30.585194   59019 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:30.585852   59019 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:30.588394   59019 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:30.590147   59019 out.go:204]   - Booting up control plane ...
	I0319 20:41:30.590241   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:30.590353   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:30.590606   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:30.611645   59019 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:30.614010   59019 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:30.614266   59019 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:30.757838   59019 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 20:41:30.757973   59019 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0319 20:41:31.758717   59019 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001332477s
	I0319 20:41:31.758819   59019 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 20:41:37.261282   59019 kubeadm.go:309] [api-check] The API server is healthy after 5.50238s
	I0319 20:41:37.275017   59019 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:37.299605   59019 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:37.335190   59019 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:37.335449   59019 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-414130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:37.350882   59019 kubeadm.go:309] [bootstrap-token] Using token: 0euy3c.pb7fih13u47u7k5a
	I0319 20:41:37.352692   59019 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:37.352796   59019 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:37.357551   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:37.365951   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:37.369544   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:37.376066   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:37.379284   59019 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:37.669667   59019 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:38.120423   59019 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:38.668937   59019 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:38.670130   59019 kubeadm.go:309] 
	I0319 20:41:38.670236   59019 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:38.670251   59019 kubeadm.go:309] 
	I0319 20:41:38.670339   59019 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:38.670348   59019 kubeadm.go:309] 
	I0319 20:41:38.670369   59019 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:38.670451   59019 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:38.670520   59019 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:38.670530   59019 kubeadm.go:309] 
	I0319 20:41:38.670641   59019 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:38.670653   59019 kubeadm.go:309] 
	I0319 20:41:38.670720   59019 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:38.670731   59019 kubeadm.go:309] 
	I0319 20:41:38.670802   59019 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:38.670916   59019 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:38.671036   59019 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:38.671053   59019 kubeadm.go:309] 
	I0319 20:41:38.671185   59019 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:38.671332   59019 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:38.671351   59019 kubeadm.go:309] 
	I0319 20:41:38.671438   59019 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671588   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:38.671609   59019 kubeadm.go:309] 	--control-plane 
	I0319 20:41:38.671613   59019 kubeadm.go:309] 
	I0319 20:41:38.671684   59019 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:38.671693   59019 kubeadm.go:309] 
	I0319 20:41:38.671758   59019 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671877   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:38.672172   59019 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
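The join commands printed above embed a bootstrap token and a discovery CA hash. Purely as an illustrative sketch (not executed in this run), that sha256 value can be recomputed from the cluster CA with the standard kubeadm recipe; the certificate directory comes from the "[certs] Using certificateDir folder" line above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | cut -d' ' -f1
	# should reproduce 879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 if run against the same CA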
	I0319 20:41:38.672197   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:41:38.672212   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:38.674158   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:38.675618   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:38.690458   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:38.712520   59019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:38.712597   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:38.712616   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-414130 minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=no-preload-414130 minikube.k8s.io/primary=true
	I0319 20:41:38.902263   59019 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:38.902364   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.403054   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.903127   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.402786   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.903358   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.403414   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.902829   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.402506   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.903338   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.402784   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.902477   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.403152   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.903190   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.402544   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.903397   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:46.402785   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
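The repeated "kubectl get sa default" calls above (and again further below) are minikube polling until the default service account exists; the log later attributes this roughly 12.9s wait to elevateKubeSystemPrivileges. A rough shell equivalent of that wait loop, reusing the exact command from the log, might be:
	until sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log shows retries at roughly 500ms intervals
	done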
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
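The failed v1.20.0 init above already names the usual diagnostics. Consolidated here only as a sketch of those suggestions (the crictl and journalctl invocations are quoted from the log; the --no-pager/tail trimming is an addition for non-interactive use):
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause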
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:46.902656   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.402845   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.903436   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.402511   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.903073   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.402559   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.902914   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.402708   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.903441   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.403416   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.585670   59019 kubeadm.go:1107] duration metric: took 12.873132825s to wait for elevateKubeSystemPrivileges
	W0319 20:41:51.585714   59019 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:51.585724   59019 kubeadm.go:393] duration metric: took 5m12.093644869s to StartCluster
	I0319 20:41:51.585744   59019 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.585835   59019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:51.588306   59019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.588634   59019 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:51.590331   59019 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:51.588755   59019 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:51.588891   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:41:51.590430   59019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-414130"
	I0319 20:41:51.591988   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:51.592020   59019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-414130"
	W0319 20:41:51.592038   59019 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:51.592069   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.590437   59019 addons.go:69] Setting default-storageclass=true in profile "no-preload-414130"
	I0319 20:41:51.590441   59019 addons.go:69] Setting metrics-server=true in profile "no-preload-414130"
	I0319 20:41:51.592098   59019 addons.go:234] Setting addon metrics-server=true in "no-preload-414130"
	W0319 20:41:51.592114   59019 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:51.592129   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.592164   59019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-414130"
	I0319 20:41:51.592450   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592479   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592505   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592532   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.608909   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0319 20:41:51.609383   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.609942   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.609962   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.610565   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.610774   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.612725   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0319 20:41:51.612794   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0319 20:41:51.613141   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.613637   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.613660   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.614121   59019 addons.go:234] Setting addon default-storageclass=true in "no-preload-414130"
	W0319 20:41:51.614139   59019 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:51.614167   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.614214   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.614482   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614512   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614774   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614810   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614876   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.615336   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.615369   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.615703   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.616237   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.616281   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.630175   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0319 20:41:51.630802   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.631279   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.631296   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.631645   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.632322   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.632356   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.634429   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0319 20:41:51.634865   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.635311   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.635324   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.635922   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.636075   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.637997   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.640025   59019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:51.641428   59019 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:51.641445   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:51.641462   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.644316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644838   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.644853   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644875   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0319 20:41:51.645162   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.645300   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.645365   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.645499   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.645613   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.645964   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.645976   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.646447   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.646663   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.648174   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.649872   59019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:51.651152   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:51.651177   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:51.651197   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.654111   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654523   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.654545   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654792   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.654987   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.655156   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.655281   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.656648   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0319 20:41:51.656960   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.657457   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.657471   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.657751   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.657948   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.659265   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.659503   59019 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:51.659517   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:51.659533   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.662039   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662427   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.662447   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662583   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.662757   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.662879   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.662991   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.845584   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:51.876597   59019 node_ready.go:35] waiting up to 6m0s for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886290   59019 node_ready.go:49] node "no-preload-414130" has status "Ready":"True"
	I0319 20:41:51.886308   59019 node_ready.go:38] duration metric: took 9.684309ms for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886315   59019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:51.893456   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:51.976850   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:52.031123   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:52.031144   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:52.133184   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:52.195945   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:52.195968   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:52.270721   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.270745   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:52.407604   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.578113   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578140   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578511   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578524   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.578532   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.578557   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578566   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578809   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578828   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.610849   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.610873   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.611246   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.611251   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.611269   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.342742   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.209525982s)
	I0319 20:41:53.342797   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.342808   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343131   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343159   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343163   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.343174   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.343194   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343486   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343503   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343525   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.450430   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.450458   59019 pod_ready.go:81] duration metric: took 1.556981953s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.450478   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459425   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.459454   59019 pod_ready.go:81] duration metric: took 8.967211ms for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459467   59019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495144   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.495164   59019 pod_ready.go:81] duration metric: took 35.690498ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495173   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520382   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.520412   59019 pod_ready.go:81] duration metric: took 25.23062ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520426   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530859   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.530889   59019 pod_ready.go:81] duration metric: took 10.451233ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530903   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.545946   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.13830463s)
	I0319 20:41:53.545994   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546009   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546304   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546323   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546333   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546350   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546678   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546695   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546706   59019 addons.go:470] Verifying addon metrics-server=true in "no-preload-414130"
	I0319 20:41:53.546764   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.548523   59019 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0319 20:41:53.549990   59019 addons.go:505] duration metric: took 1.961237309s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
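With the three addons reported enabled above, a manual follow-up one might run (not part of this log; the profile/context name and the expected pods are taken from the surrounding log lines) is:
	kubectl --context no-preload-414130 -n kube-system get pods
	# expect storage-provisioner and a metrics-server-* pod alongside the control-plane pods listed below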
	I0319 20:41:53.881082   59019 pod_ready.go:92] pod "kube-proxy-m7m4h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.881107   59019 pod_ready.go:81] duration metric: took 350.197776ms for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.881116   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283891   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:54.283924   59019 pod_ready.go:81] duration metric: took 402.800741ms for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283936   59019 pod_ready.go:38] duration metric: took 2.397611991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:54.283953   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:54.284016   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:54.304606   59019 api_server.go:72] duration metric: took 2.715931012s to wait for apiserver process to appear ...
	I0319 20:41:54.304629   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:54.304651   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:41:54.309292   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:41:54.310195   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:41:54.310215   59019 api_server.go:131] duration metric: took 5.579162ms to wait for apiserver health ...
	I0319 20:41:54.310225   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:54.488441   59019 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:54.488475   59019 system_pods.go:61] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.488482   59019 system_pods.go:61] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.488487   59019 system_pods.go:61] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.488491   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.488496   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.488500   59019 system_pods.go:61] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.488505   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.488514   59019 system_pods.go:61] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.488520   59019 system_pods.go:61] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.488530   59019 system_pods.go:74] duration metric: took 178.298577ms to wait for pod list to return data ...
	I0319 20:41:54.488543   59019 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:54.679537   59019 default_sa.go:45] found service account: "default"
	I0319 20:41:54.679560   59019 default_sa.go:55] duration metric: took 191.010696ms for default service account to be created ...
	I0319 20:41:54.679569   59019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:54.884163   59019 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:54.884197   59019 system_pods.go:89] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.884205   59019 system_pods.go:89] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.884211   59019 system_pods.go:89] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.884217   59019 system_pods.go:89] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.884223   59019 system_pods.go:89] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.884230   59019 system_pods.go:89] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.884236   59019 system_pods.go:89] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.884246   59019 system_pods.go:89] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.884268   59019 system_pods.go:89] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.884281   59019 system_pods.go:126] duration metric: took 204.70598ms to wait for k8s-apps to be running ...
	I0319 20:41:54.884294   59019 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:54.884348   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:54.901838   59019 system_svc.go:56] duration metric: took 17.536645ms WaitForService to wait for kubelet
	I0319 20:41:54.901869   59019 kubeadm.go:576] duration metric: took 3.313198534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:54.901887   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:55.080463   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:55.080485   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:55.080495   59019 node_conditions.go:105] duration metric: took 178.603035ms to run NodePressure ...
	I0319 20:41:55.080507   59019 start.go:240] waiting for startup goroutines ...
	I0319 20:41:55.080513   59019 start.go:245] waiting for cluster config update ...
	I0319 20:41:55.080523   59019 start.go:254] writing updated cluster config ...
	I0319 20:41:55.080753   59019 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:55.130477   59019 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0319 20:41:55.133906   59019 out.go:177] * Done! kubectl is now configured to use "no-preload-414130" cluster and "default" namespace by default
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 
	
	
	==> CRI-O <==
	Mar 19 20:43:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:47.966095208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881027966075420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fe8204e-3058-435c-9f25-dfabcc2fb180 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:47.966802460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31f9f1d0-2b2b-498c-84bd-1e0aa538d104 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:47.966856024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31f9f1d0-2b2b-498c-84bd-1e0aa538d104 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:47.966887237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=31f9f1d0-2b2b-498c-84bd-1e0aa538d104 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.006128054Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a35d8d5-cc67-4e1f-8ee3-9c3e4839be2e name=/runtime.v1.RuntimeService/Version
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.006201831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a35d8d5-cc67-4e1f-8ee3-9c3e4839be2e name=/runtime.v1.RuntimeService/Version
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.007871313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10230a0c-20fb-47b1-9a14-b6e9beaad862 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.008220448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881028008200732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10230a0c-20fb-47b1-9a14-b6e9beaad862 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.008889885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a04d96b-4fb6-4c98-afbd-ef7d7ed4ca5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.008953447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a04d96b-4fb6-4c98-afbd-ef7d7ed4ca5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.008988306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2a04d96b-4fb6-4c98-afbd-ef7d7ed4ca5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.046584808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5839efb-b5d0-4ffa-a127-8bbc248f8ce4 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.046652531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5839efb-b5d0-4ffa-a127-8bbc248f8ce4 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.047917530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f402066b-329f-4763-8bd0-649ea3e811fa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.048458931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881028048432336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f402066b-329f-4763-8bd0-649ea3e811fa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.049057837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a974a1d-d5df-414d-8b4a-538181b7eef3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.049139479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a974a1d-d5df-414d-8b4a-538181b7eef3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.049177845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7a974a1d-d5df-414d-8b4a-538181b7eef3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.091077959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4de2d2a2-ab42-4ca8-ae77-99ab7d590d2b name=/runtime.v1.RuntimeService/Version
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.091225859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4de2d2a2-ab42-4ca8-ae77-99ab7d590d2b name=/runtime.v1.RuntimeService/Version
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.093129630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a89c3e9-687e-4d98-8429-a374d0caa9c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.093611878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881028093592296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a89c3e9-687e-4d98-8429-a374d0caa9c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.094396001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=714331e0-0c21-47f2-8809-f8ae67e31dbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.094470535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=714331e0-0c21-47f2-8809-f8ae67e31dbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:43:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:43:48.094505795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=714331e0-0c21-47f2-8809-f8ae67e31dbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar19 20:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055341] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752911] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.544871] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.711243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.190356] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.060609] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066334] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.201088] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.130943] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.285680] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +7.272629] systemd-fstab-generator[845]: Ignoring "noauto" option for root device
	[  +0.072227] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.223992] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +10.810145] kauditd_printk_skb: 46 callbacks suppressed
	[Mar19 20:39] systemd-fstab-generator[4992]: Ignoring "noauto" option for root device
	[Mar19 20:41] systemd-fstab-generator[5275]: Ignoring "noauto" option for root device
	[  +0.073912] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:43:48 up 8 min,  0 users,  load average: 0.16, 0.17, 0.09
	Linux old-k8s-version-159022 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bb2430, 0xc000bdc060)
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: goroutine 159 [runnable]:
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: runtime.Gosched(...)
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /usr/local/go/src/runtime/proc.go:271
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bde960, 0x0, 0x0)
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0000c4540)
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 19 20:43:45 old-k8s-version-159022 kubelet[5457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 19 20:43:45 old-k8s-version-159022 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 19 20:43:45 old-k8s-version-159022 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 19 20:43:46 old-k8s-version-159022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 19 20:43:46 old-k8s-version-159022 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 19 20:43:46 old-k8s-version-159022 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 19 20:43:46 old-k8s-version-159022 kubelet[5524]: I0319 20:43:46.542363    5524 server.go:416] Version: v1.20.0
	Mar 19 20:43:46 old-k8s-version-159022 kubelet[5524]: I0319 20:43:46.542637    5524 server.go:837] Client rotation is on, will bootstrap in background
	Mar 19 20:43:46 old-k8s-version-159022 kubelet[5524]: I0319 20:43:46.544730    5524 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 19 20:43:46 old-k8s-version-159022 kubelet[5524]: W0319 20:43:46.545794    5524 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 19 20:43:46 old-k8s-version-159022 kubelet[5524]: I0319 20:43:46.546150    5524 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (245.368001ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-159022" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (748.46s)
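For reference, the minikube output captured above suggests retrying this start with the systemd cgroup driver. A minimal, hypothetical re-run of the failed profile with that override is sketched below; the profile name, Kubernetes version, driver, and runtime are taken from the log, while the remaining flags of the original test invocation are not reproduced here:

	# Hypothetical retry of the failing profile, applying the workaround the log itself suggests
	out/minikube-linux-amd64 start -p old-k8s-version-159022 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd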

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240: exit status 3 (3.163796138s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0319 20:32:51.220650   59898 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	E0319 20:32:51.220667   59898 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-385240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-385240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153197004s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-385240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240: exit status 3 (3.062536656s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0319 20:33:00.436704   59967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	E0319 20:33:00.436721   59967 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-385240" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.29s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0319 20:40:04.834451   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-421660 -n embed-certs-421660
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-19 20:48:57.949597568 +0000 UTC m=+6262.070217336
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-421660 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-421660 logs -n 25: (2.118498014s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:33:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:33:00.489344   60008 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:33:00.489594   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489603   60008 out.go:304] Setting ErrFile to fd 2...
	I0319 20:33:00.489607   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489787   60008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:33:00.490297   60008 out.go:298] Setting JSON to false
	I0319 20:33:00.491188   60008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8078,"bootTime":1710872302,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:33:00.491245   60008 start.go:139] virtualization: kvm guest
	I0319 20:33:00.493588   60008 out.go:177] * [default-k8s-diff-port-385240] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:33:00.495329   60008 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:33:00.496506   60008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:33:00.495369   60008 notify.go:220] Checking for updates...
	I0319 20:33:00.499210   60008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:33:00.500494   60008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:33:00.501820   60008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:33:00.503200   60008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:33:00.504837   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:33:00.505191   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.505266   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.519674   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0319 20:33:00.520123   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.520634   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.520656   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.520945   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.521132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.521364   60008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:33:00.521629   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.521660   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.535764   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0319 20:33:00.536105   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.536564   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.536583   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.536890   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.537079   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.572160   60008 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:33:00.573517   60008 start.go:297] selected driver: kvm2
	I0319 20:33:00.573530   60008 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.573663   60008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:33:00.574335   60008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.574423   60008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:33:00.588908   60008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:33:00.589283   60008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:33:00.589354   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:33:00.589375   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:33:00.589419   60008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.589532   60008 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.591715   60008 out.go:177] * Starting "default-k8s-diff-port-385240" primary control-plane node in "default-k8s-diff-port-385240" cluster
	I0319 20:32:58.292485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:01.364553   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:00.593043   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:33:00.593084   60008 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:33:00.593094   60008 cache.go:56] Caching tarball of preloaded images
	I0319 20:33:00.593156   60008 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:33:00.593166   60008 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:33:00.593281   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:33:00.593454   60008 start.go:360] acquireMachinesLock for default-k8s-diff-port-385240: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:33:07.444550   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:10.516480   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:16.596485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:19.668501   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:25.748504   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:28.820525   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:34.900508   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:37.972545   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:44.052478   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:47.124492   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:53.204484   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:56.276536   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:02.356552   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:05.428529   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:11.508540   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:14.580485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:20.660521   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:23.732555   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:29.812516   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:32.884574   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:38.964472   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:42.036583   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:48.116547   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:51.188507   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:54.193037   59415 start.go:364] duration metric: took 3m51.108134555s to acquireMachinesLock for "embed-certs-421660"
	I0319 20:34:54.193108   59415 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:34:54.193120   59415 fix.go:54] fixHost starting: 
	I0319 20:34:54.193458   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:34:54.193487   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:34:54.208614   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0319 20:34:54.209078   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:34:54.209506   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:34:54.209527   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:34:54.209828   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:34:54.209992   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:34:54.210117   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:34:54.211626   59415 fix.go:112] recreateIfNeeded on embed-certs-421660: state=Stopped err=<nil>
	I0319 20:34:54.211661   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	W0319 20:34:54.211820   59415 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:34:54.213989   59415 out.go:177] * Restarting existing kvm2 VM for "embed-certs-421660" ...
	I0319 20:34:54.190431   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:34:54.190483   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.190783   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:34:54.190809   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.191021   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:34:54.192901   59019 machine.go:97] duration metric: took 4m37.398288189s to provisionDockerMachine
	I0319 20:34:54.192939   59019 fix.go:56] duration metric: took 4m37.41948201s for fixHost
	I0319 20:34:54.192947   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 4m37.419503815s
	W0319 20:34:54.192970   59019 start.go:713] error starting host: provision: host is not running
	W0319 20:34:54.193060   59019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0319 20:34:54.193071   59019 start.go:728] Will try again in 5 seconds ...
	I0319 20:34:54.215391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Start
	I0319 20:34:54.215559   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring networks are active...
	I0319 20:34:54.216249   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network default is active
	I0319 20:34:54.216543   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network mk-embed-certs-421660 is active
	I0319 20:34:54.216902   59415 main.go:141] libmachine: (embed-certs-421660) Getting domain xml...
	I0319 20:34:54.217595   59415 main.go:141] libmachine: (embed-certs-421660) Creating domain...
	I0319 20:34:55.407058   59415 main.go:141] libmachine: (embed-certs-421660) Waiting to get IP...
	I0319 20:34:55.407855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.408280   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.408343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.408247   60323 retry.go:31] will retry after 202.616598ms: waiting for machine to come up
	I0319 20:34:55.612753   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.613313   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.613341   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.613247   60323 retry.go:31] will retry after 338.618778ms: waiting for machine to come up
	I0319 20:34:55.953776   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.954230   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.954259   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.954164   60323 retry.go:31] will retry after 389.19534ms: waiting for machine to come up
	I0319 20:34:56.344417   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.344855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.344886   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.344822   60323 retry.go:31] will retry after 555.697854ms: waiting for machine to come up
	I0319 20:34:56.902547   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.902990   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.903017   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.902955   60323 retry.go:31] will retry after 702.649265ms: waiting for machine to come up
	I0319 20:34:57.606823   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:57.607444   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:57.607484   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:57.607388   60323 retry.go:31] will retry after 814.886313ms: waiting for machine to come up
	I0319 20:34:59.194634   59019 start.go:360] acquireMachinesLock for no-preload-414130: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:34:58.424559   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:58.425066   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:58.425088   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:58.425011   60323 retry.go:31] will retry after 948.372294ms: waiting for machine to come up
	I0319 20:34:59.375490   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:59.375857   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:59.375884   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:59.375809   60323 retry.go:31] will retry after 1.206453994s: waiting for machine to come up
	I0319 20:35:00.584114   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:00.584548   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:00.584572   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:00.584496   60323 retry.go:31] will retry after 1.200177378s: waiting for machine to come up
	I0319 20:35:01.786803   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:01.787139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:01.787167   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:01.787085   60323 retry.go:31] will retry after 1.440671488s: waiting for machine to come up
	I0319 20:35:03.229775   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:03.230179   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:03.230216   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:03.230146   60323 retry.go:31] will retry after 2.073090528s: waiting for machine to come up
	I0319 20:35:05.305427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:05.305904   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:05.305930   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:05.305859   60323 retry.go:31] will retry after 3.463824423s: waiting for machine to come up
	I0319 20:35:08.773517   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:08.773911   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:08.773938   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:08.773873   60323 retry.go:31] will retry after 4.159170265s: waiting for machine to come up
	I0319 20:35:12.937475   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937965   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has current primary IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937979   59415 main.go:141] libmachine: (embed-certs-421660) Found IP for machine: 192.168.50.108
	I0319 20:35:12.937987   59415 main.go:141] libmachine: (embed-certs-421660) Reserving static IP address...
	I0319 20:35:12.938372   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.938400   59415 main.go:141] libmachine: (embed-certs-421660) DBG | skip adding static IP to network mk-embed-certs-421660 - found existing host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"}
	I0319 20:35:12.938412   59415 main.go:141] libmachine: (embed-certs-421660) Reserved static IP address: 192.168.50.108
	I0319 20:35:12.938435   59415 main.go:141] libmachine: (embed-certs-421660) Waiting for SSH to be available...
	I0319 20:35:12.938448   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Getting to WaitForSSH function...
	I0319 20:35:12.940523   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.940897   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.940932   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.941037   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH client type: external
	I0319 20:35:12.941069   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa (-rw-------)
	I0319 20:35:12.941102   59415 main.go:141] libmachine: (embed-certs-421660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:12.941116   59415 main.go:141] libmachine: (embed-certs-421660) DBG | About to run SSH command:
	I0319 20:35:12.941128   59415 main.go:141] libmachine: (embed-certs-421660) DBG | exit 0
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:13.068386   59415 main.go:141] libmachine: (embed-certs-421660) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:13.068756   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetConfigRaw
	I0319 20:35:13.069421   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.071751   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072101   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.072133   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072393   59415 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:35:13.072557   59415 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:13.072574   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:13.072781   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.075005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.075369   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.075678   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075816   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075973   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.076134   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.076364   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.076382   59415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:13.188983   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:13.189017   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189291   59415 buildroot.go:166] provisioning hostname "embed-certs-421660"
	I0319 20:35:13.189319   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189503   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.191881   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192190   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.192210   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192389   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.192550   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192696   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192818   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.192989   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.193145   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.193159   59415 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-421660 && echo "embed-certs-421660" | sudo tee /etc/hostname
	I0319 20:35:13.326497   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-421660
	
	I0319 20:35:13.326524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.329344   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.329765   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.330179   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330372   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330547   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.330753   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.330928   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.330943   59415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-421660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-421660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-421660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:13.454265   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:13.454297   59415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:13.454320   59415 buildroot.go:174] setting up certificates
	I0319 20:35:13.454334   59415 provision.go:84] configureAuth start
	I0319 20:35:13.454348   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.454634   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.457258   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457692   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.457723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457834   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.460123   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460436   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.460463   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460587   59415 provision.go:143] copyHostCerts
	I0319 20:35:13.460643   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:13.460652   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:13.460719   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:13.460815   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:13.460822   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:13.460846   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:13.460917   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:13.460924   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:13.460945   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:13.461004   59415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.embed-certs-421660 san=[127.0.0.1 192.168.50.108 embed-certs-421660 localhost minikube]
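	If needed, the SAN list baked into the generated server.pem can be confirmed with openssl on the CI host; this is an illustrative check, not part of the minikube run:

		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'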
	I0319 20:35:13.553348   59415 provision.go:177] copyRemoteCerts
	I0319 20:35:13.553399   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:13.553424   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.555729   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556036   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.556071   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556199   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.556406   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.556579   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.556725   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:13.642780   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:35:13.670965   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:13.698335   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:13.724999   59415 provision.go:87] duration metric: took 270.652965ms to configureAuth
	I0319 20:35:13.725022   59415 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:13.725174   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:13.725235   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.727653   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.727969   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.727988   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.728186   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.728410   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728581   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728783   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.728960   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.729113   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.729130   59415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:14.012527   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:14.012554   59415 machine.go:97] duration metric: took 939.982813ms to provisionDockerMachine
	I0319 20:35:14.012568   59415 start.go:293] postStartSetup for "embed-certs-421660" (driver="kvm2")
	I0319 20:35:14.012582   59415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:14.012616   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.012969   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:14.012996   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.015345   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015706   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.015759   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015864   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.016069   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.016269   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.016409   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.105236   59415 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:14.110334   59415 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:14.110363   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:14.110435   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:14.110534   59415 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:14.110623   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:14.120911   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:14.148171   59415 start.go:296] duration metric: took 135.590484ms for postStartSetup
	I0319 20:35:14.148209   59415 fix.go:56] duration metric: took 19.955089617s for fixHost
	I0319 20:35:14.148234   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.150788   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.151165   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151331   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.151514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151667   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151784   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.151953   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:14.152125   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:14.152138   59415 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:14.265435   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880514.234420354
	
	I0319 20:35:14.265467   59415 fix.go:216] guest clock: 1710880514.234420354
	I0319 20:35:14.265478   59415 fix.go:229] Guest: 2024-03-19 20:35:14.234420354 +0000 UTC Remote: 2024-03-19 20:35:14.148214105 +0000 UTC m=+251.208119911 (delta=86.206249ms)
	I0319 20:35:14.265507   59415 fix.go:200] guest clock delta is within tolerance: 86.206249ms
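	The delta reported here is just the guest timestamp minus the host-side timestamp taken in the same fix.go check: 1710880514.234420354 - 1710880514.148214105 = 0.086206249 s = 86.206249 ms, small enough to pass the tolerance check, so no clock adjustment is needed.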
	I0319 20:35:14.265516   59415 start.go:83] releasing machines lock for "embed-certs-421660", held for 20.072435424s
	I0319 20:35:14.265554   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.265868   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:14.268494   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268846   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.268874   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269589   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269751   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269833   59415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:14.269884   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.269956   59415 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:14.269972   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.272604   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272771   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272978   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273137   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273140   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273160   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273316   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273337   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273473   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273685   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.273738   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.358033   59415 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:14.385511   59415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:14.542052   59415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:14.549672   59415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:14.549747   59415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:14.569110   59415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:14.569137   59415 start.go:494] detecting cgroup driver to use...
	I0319 20:35:14.569193   59415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:14.586644   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:14.601337   59415 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:14.601407   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:14.616158   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:14.631754   59415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:14.746576   59415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:14.902292   59415 docker.go:233] disabling docker service ...
	I0319 20:35:14.902353   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:14.920787   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:14.938865   59415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:15.078791   59415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:15.214640   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:15.242992   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:15.264698   59415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:15.264755   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.276750   59415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:15.276817   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.288643   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.300368   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.318906   59415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:15.338660   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.351908   59415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.372022   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
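	Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a sketch reconstructed from the commands themselves (the surrounding TOML table headers are not shown in the log and are omitted here):

		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]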
	I0319 20:35:15.384124   59415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:15.395206   59415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:15.395268   59415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:15.411193   59415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:15.422031   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:15.572313   59415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:15.730316   59415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:15.730389   59415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:15.738539   59415 start.go:562] Will wait 60s for crictl version
	I0319 20:35:15.738600   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:35:15.743107   59415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:15.788582   59415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:15.788666   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.819444   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.859201   59415 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:15.860492   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:15.863655   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864032   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:15.864063   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864303   59415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:15.870600   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:15.885694   59415 kubeadm.go:877] updating cluster {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:15.885833   59415 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:15.885890   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:15.924661   59415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:15.924736   59415 ssh_runner.go:195] Run: which lz4
	I0319 20:35:15.929595   59415 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:15.934980   59415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:15.935014   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:17.673355   59415 crio.go:462] duration metric: took 1.743798593s to copy over tarball
	I0319 20:35:17.673428   59415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:20.169596   59415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49612841s)
	I0319 20:35:20.169629   59415 crio.go:469] duration metric: took 2.496240167s to extract the tarball
	I0319 20:35:20.169639   59415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:20.208860   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:20.261040   59415 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:35:20.261063   59415 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:35:20.261071   59415 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.29.3 crio true true} ...
	I0319 20:35:20.261162   59415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-421660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
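	One way to confirm what the kubelet on the guest was actually started with, once the drop-in has been copied over (see the scp to kubelet.service.d later in this log), is to dump the unit and its drop-in; an illustrative check:

		sudo systemctl cat kubelet
		sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf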
	I0319 20:35:20.261227   59415 ssh_runner.go:195] Run: crio config
	I0319 20:35:20.311322   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:20.311346   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:20.311359   59415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:20.311377   59415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-421660 NodeName:embed-certs-421660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:35:20.311501   59415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-421660"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:20.311560   59415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:35:20.323700   59415 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:20.323776   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:20.334311   59415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0319 20:35:20.352833   59415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:20.372914   59415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0319 20:35:20.391467   59415 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:20.395758   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:20.408698   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:20.532169   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:20.550297   59415 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660 for IP: 192.168.50.108
	I0319 20:35:20.550320   59415 certs.go:194] generating shared ca certs ...
	I0319 20:35:20.550339   59415 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:20.550507   59415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:20.550574   59415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:20.550586   59415 certs.go:256] generating profile certs ...
	I0319 20:35:20.550700   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/client.key
	I0319 20:35:20.550774   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key.e5ca10b2
	I0319 20:35:20.550824   59415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key
	I0319 20:35:20.550954   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:20.550988   59415 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:20.551001   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:20.551037   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:20.551070   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:20.551101   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:20.551155   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:20.552017   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:20.583444   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:20.616935   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:20.673499   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:20.707988   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0319 20:35:20.734672   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:35:20.761302   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:20.792511   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:20.819903   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:20.848361   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:20.878230   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:20.908691   59415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:20.930507   59415 ssh_runner.go:195] Run: openssl version
	I0319 20:35:20.937088   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:20.949229   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954299   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954343   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.960610   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:20.972162   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:20.984137   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989211   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989273   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.995436   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:21.007076   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:21.018552   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024109   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024146   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.030344   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
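	The `openssl x509 -hash` calls above produce the subject-hash names used for the /etc/ssl/certs symlinks; for example (illustrative):

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created just above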
	I0319 20:35:21.041615   59415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:21.046986   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:21.053533   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:21.060347   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:21.067155   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:21.074006   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:21.080978   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
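	The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 h), which is how these checks tell a still-usable cert from one that needs regeneration. A minimal stand-alone sketch:

		openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
		  && echo "still valid for at least 24h" \
		  || echo "expires within 24h"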
	I0319 20:35:21.087615   59415 kubeadm.go:391] StartCluster: {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:21.087695   59415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:21.087745   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.131217   59415 cri.go:89] found id: ""
	I0319 20:35:21.131294   59415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:21.143460   59415 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:21.143487   59415 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:21.143493   59415 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:21.143545   59415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:21.156145   59415 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:21.157080   59415 kubeconfig.go:125] found "embed-certs-421660" server: "https://192.168.50.108:8443"
	I0319 20:35:21.158865   59415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:21.171515   59415 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.108
	I0319 20:35:21.171551   59415 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:21.171561   59415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:21.171607   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.221962   59415 cri.go:89] found id: ""
	I0319 20:35:21.222028   59415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:21.239149   59415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:21.250159   59415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:21.250185   59415 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:21.250242   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:21.260035   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:21.260107   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:21.270804   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:21.281041   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:21.281106   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:21.291796   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.301883   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:21.301943   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.313038   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:21.323390   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:21.323462   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:21.333893   59415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:21.344645   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:21.491596   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.349871   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.592803   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.670220   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
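	Because existing configuration files were found, minikube restarts the control plane by re-running individual kubeadm phases rather than a full init; condensed from the five commands above (same binary path and config file):

		# condensed sketch of the restart sequence shown in the log
		KUBEADM_PATH=/var/lib/minikube/binaries/v1.29.3
		for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
		  sudo env PATH="$KUBEADM_PATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
		done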
	I0319 20:35:22.802978   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:22.803071   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
	I0319 20:35:23.303983   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.803254   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.846475   59415 api_server.go:72] duration metric: took 1.043496842s to wait for apiserver process to appear ...
	I0319 20:35:23.846509   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:35:23.846532   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:23.847060   59415 api_server.go:269] stopped: https://192.168.50.108:8443/healthz: Get "https://192.168.50.108:8443/healthz": dial tcp 192.168.50.108:8443: connect: connection refused
	I0319 20:35:24.347376   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.456794   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.456826   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.456841   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.492793   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.492827   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.847365   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.857297   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:26.857327   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.346936   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.351748   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:27.351775   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.847430   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.852157   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:35:27.868953   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:35:27.869006   59415 api_server.go:131] duration metric: took 4.022477349s to wait for apiserver health ...
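	The healthz sequence above tolerates connection-refused, 403 (anonymous user) and 500 (poststart hooks still running) responses until the apiserver finally answers 200. A rough Go sketch of that polling loop, assuming the endpoint from this run and skipping TLS verification because the probe carries no client certificate:

// Probe /healthz until the apiserver reports healthy or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.108:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}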
	I0319 20:35:27.869019   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:27.869029   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:27.871083   59415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:35:27.872669   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:35:27.886256   59415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
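	The scp line above drops a 457-byte bridge CNI config into /etc/cni/net.d. A hypothetical sketch of writing a conflist of that kind; the plugin fields below are a plausible bridge/host-local configuration, not the exact payload minikube generated for this run:

// Write a minimal bridge CNI conflist (illustrative field values).
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}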
	I0319 20:35:27.912891   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:35:27.928055   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:35:27.928088   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:35:27.928095   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:35:27.928102   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:35:27.928108   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:35:27.928113   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:35:27.928118   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:35:27.928126   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:35:27.928130   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:35:27.928136   59415 system_pods.go:74] duration metric: took 15.221738ms to wait for pod list to return data ...
	I0319 20:35:27.928142   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:35:27.931854   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:35:27.931876   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:35:27.931888   59415 node_conditions.go:105] duration metric: took 3.74189ms to run NodePressure ...
	I0319 20:35:27.931903   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:28.209912   59415 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215315   59415 kubeadm.go:733] kubelet initialised
	I0319 20:35:28.215343   59415 kubeadm.go:734] duration metric: took 5.403708ms waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215353   59415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:28.221636   59415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.230837   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230868   59415 pod_ready.go:81] duration metric: took 9.198177ms for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.230878   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230887   59415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.237452   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237472   59415 pod_ready.go:81] duration metric: took 6.569363ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.237479   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237485   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.242902   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242919   59415 pod_ready.go:81] duration metric: took 5.427924ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.242926   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242931   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.316859   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316889   59415 pod_ready.go:81] duration metric: took 73.950437ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.316901   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316908   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.717107   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717133   59415 pod_ready.go:81] duration metric: took 400.215265ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.717143   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717151   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.117365   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117403   59415 pod_ready.go:81] duration metric: took 400.242952ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.117416   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117427   59415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.517914   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517950   59415 pod_ready.go:81] duration metric: took 400.512217ms for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.517962   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517974   59415 pod_ready.go:38] duration metric: took 1.302609845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
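	The pod_ready lines above poll each system-critical pod by label and skip it while the node is not Ready. A sketch of that kind of readiness check using client-go; the kubeconfig path and the two selectors are assumptions for illustration:

// List kube-system pods for a selector and check the PodReady condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, selector := range []string{"k8s-app=kube-dns", "component=kube-apiserver"} {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Println(selector, "is Ready")
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}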
	I0319 20:35:29.518009   59415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:35:29.534665   59415 ops.go:34] apiserver oom_adj: -16
	I0319 20:35:29.534686   59415 kubeadm.go:591] duration metric: took 8.39118752s to restartPrimaryControlPlane
	I0319 20:35:29.534697   59415 kubeadm.go:393] duration metric: took 8.447087595s to StartCluster
	I0319 20:35:29.534713   59415 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.534814   59415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:29.536379   59415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.536620   59415 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:35:29.538397   59415 out.go:177] * Verifying Kubernetes components...
	I0319 20:35:29.536707   59415 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:35:29.536837   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:29.539696   59415 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-421660"
	I0319 20:35:29.539709   59415 addons.go:69] Setting metrics-server=true in profile "embed-certs-421660"
	I0319 20:35:29.539739   59415 addons.go:234] Setting addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:29.539747   59415 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-421660"
	W0319 20:35:29.539751   59415 addons.go:243] addon metrics-server should already be in state true
	W0319 20:35:29.539757   59415 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:35:29.539782   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539786   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539700   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:29.539700   59415 addons.go:69] Setting default-storageclass=true in profile "embed-certs-421660"
	I0319 20:35:29.539882   59415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-421660"
	I0319 20:35:29.540079   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540098   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540107   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540120   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540243   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540282   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.554668   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0319 20:35:29.554742   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0319 20:35:29.554815   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0319 20:35:29.555109   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555148   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555220   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555703   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555708   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555722   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555726   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555828   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555847   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.556077   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556206   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556273   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.556627   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556669   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.556753   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556787   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.559109   59415 addons.go:234] Setting addon default-storageclass=true in "embed-certs-421660"
	W0319 20:35:29.559126   59415 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:35:29.559150   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.559390   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.559425   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.570567   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0319 20:35:29.571010   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.571467   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.571492   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.571831   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.572018   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.573621   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.575889   59415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:35:29.574300   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0319 20:35:29.574529   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0319 20:35:29.577448   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:35:29.577473   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:35:29.577496   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.577913   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.577957   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.578350   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578382   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.578751   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.578877   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578901   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.579318   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.579431   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.579495   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.579509   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.580582   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581050   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.581074   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581166   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.581276   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.583314   59415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:29.581522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.584941   59415 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.584951   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:35:29.584963   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.584980   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.585154   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.587700   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588076   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.588104   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588289   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.588463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.588614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.588791   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.594347   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0319 20:35:29.594626   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.595030   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.595062   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.595384   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.595524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.596984   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.597209   59415 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.597224   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:35:29.597238   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.599955   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.600457   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600533   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.600682   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.600829   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.600926   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.719989   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:29.737348   59415 node_ready.go:35] waiting up to 6m0s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:29.839479   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.839994   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:35:29.840016   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:35:29.852112   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.904335   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:35:29.904358   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:35:29.969646   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:29.969675   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:35:30.031528   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:31.120085   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.280572793s)
	I0319 20:35:31.120135   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120148   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120172   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268019206s)
	I0319 20:35:31.120214   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120229   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120430   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120448   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120457   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120544   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120564   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120588   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120606   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120758   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120788   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120827   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120833   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120841   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.127070   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.127085   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.127287   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.127301   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.138956   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.107385118s)
	I0319 20:35:31.139006   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139027   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139257   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139301   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139319   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139330   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139342   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139546   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139550   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139564   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139579   59415 addons.go:470] Verifying addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:31.141587   59415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:31.142927   59415 addons.go:505] duration metric: took 1.606231661s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
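	The addon-enable step above boils down to the kubectl apply invocations recorded in the Run lines: the versioned in-VM kubectl is pointed at the machine's kubeconfig and fed each addon manifest. A minimal sketch of that command shape, run through a local shell for brevity (in the real flow it executes inside the VM over SSH):

// Apply the metrics-server addon manifests with the in-VM kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.29.3/kubectl apply -f " +
		strings.Join(manifests, " -f ")
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}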
	I0319 20:35:31.741584   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.101840   60008 start.go:364] duration metric: took 2m35.508355671s to acquireMachinesLock for "default-k8s-diff-port-385240"
	I0319 20:35:36.101908   60008 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:36.101921   60008 fix.go:54] fixHost starting: 
	I0319 20:35:36.102308   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:36.102352   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:36.118910   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0319 20:35:36.119363   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:36.119926   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:35:36.119957   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:36.120271   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:36.120450   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:36.120614   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:35:36.122085   60008 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385240: state=Stopped err=<nil>
	I0319 20:35:36.122112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	W0319 20:35:36.122284   60008 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:36.124242   60008 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385240" ...
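	"Restarting existing kvm2 VM" means starting the already-defined but stopped libvirt domain rather than recreating it. A hypothetical sketch of doing the equivalent by shelling out to virsh; the real kvm2 driver talks to libvirt through its Go bindings, not the CLI:

// Start a stopped libvirt domain by name.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("virsh", "start", "default-k8s-diff-port-385240").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("virsh start failed:", err)
	}
}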
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
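	The provisioning commands above are pushed to the guest over key-based SSH as the docker user. A minimal sketch, assuming the key path and address from this log, of running the same hostname command with golang.org/x/crypto/ssh (host-key checking disabled purely for brevity):

// Run the hostname-provisioning command over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.61.28:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(
		`sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}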
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
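
Note on the provisioning step above: configureAuth regenerates the guest's server certificate so its SANs cover 127.0.0.1, the machine IP (192.168.61.28), localhost, minikube and the profile name, then copies the CA/server/key PEMs into /etc/docker. Below is a minimal Go sketch of issuing such a SAN'd server certificate with the standard crypto/x509 package; the throwaway CA, helper names and validity period are illustrative assumptions, not minikube's actual provision.go code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // certTemplate builds a bare certificate template; values are illustrative.
    func certTemplate(cn string, isCA bool) *x509.Certificate {
        return &x509.Certificate{
            SerialNumber:          big.NewInt(time.Now().UnixNano()),
            Subject:               pkix.Name{CommonName: cn, Organization: []string{"jenkins.old-k8s-version-159022"}},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment | x509.KeyUsageCertSign,
            ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            BasicConstraintsValid: true,
            IsCA:                  isCA,
        }
    }

    func main() {
        // SANs as reported by provision.go:117 above.
        sans := []string{"127.0.0.1", "192.168.61.28", "localhost", "minikube", "old-k8s-version-159022"}

        // Throwaway CA, generated only so the example is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := certTemplate("minikubeCA", true)
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate signed by that CA, with IP and DNS SANs split apart.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := certTemplate("minikube", false)
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                srvTmpl.IPAddresses = append(srvTmpl.IPAddresses, ip)
            } else {
                srvTmpl.DNSNames = append(srvTmpl.DNSNames, san)
            }
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        fmt.Printf("server.pem: %d bytes\n",
            len(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
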
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
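
Every "ssh_runner.go ... Run:" line above is a command executed on the guest over SSH with the per-machine key shown in the sshutil.go line. A minimal sketch of that pattern with the generic golang.org/x/crypto/ssh package follows; the host, user and key path are copied from the log, while the code itself is a stand-in, not minikube's internal runner.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.61.28:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Each "Run:" log line boils down to one session per remote command.
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("cat /etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
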
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
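
The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and skip any resynchronisation because the ~92ms delta is within tolerance. A tiny worked example of that comparison, using the exact timestamps from the log; the 2s tolerance constant is an assumption for illustration only.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the log above: guest epoch seconds.nanoseconds and host time.
        guest := time.Unix(1710880536, 82251645)
        host := time.Date(2024, 3, 19, 20, 35, 35, 990292857, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold only
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
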
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
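
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image is registry.k8s.io/pause:3.2 and cgroup_manager/conmon_cgroup match the cgroupfs driver the kubelet will use. A small sketch of the same rewrite done with Go regexps instead of sed; the file path is taken from the log, everything else is illustrative.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf := string(data)

        // 1. Force the pause image (the first sed in the log).
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        // 2. Force the cgroup driver (the second sed).
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // 3. Drop stale conmon_cgroup lines, then append one right after cgroup_manager,
        //    mirroring the '/d' and '/a' sed expressions above.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("updated", path)
    }
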
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
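
After `systemctl restart crio`, start.go waits up to 60s for the /var/run/crio/crio.sock socket to appear and then for crictl to report a version before continuing. A generic polling sketch of that wait; the timeout and paths come from the log, while the 500ms interval is an assumption.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls check() every 500ms until it succeeds or the timeout expires.
    func waitFor(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v: %w", timeout, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        // "Will wait 60s for socket path /var/run/crio/crio.sock"
        if err := waitFor(60*time.Second, func() error {
            _, err := os.Stat("/var/run/crio/crio.sock")
            return err
        }); err != nil {
            panic(err)
        }
        // "Will wait 60s for crictl version"
        if err := waitFor(60*time.Second, func() error {
            return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
        }); err != nil {
            panic(err)
        }
        fmt.Println("cri-o is up")
    }
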
	I0319 20:35:33.742461   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.241937   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.743420   59415 node_ready.go:49] node "embed-certs-421660" has status "Ready":"True"
	I0319 20:35:36.743447   59415 node_ready.go:38] duration metric: took 7.006070851s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:36.743458   59415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:36.749810   59415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:36.125778   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Start
	I0319 20:35:36.125974   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring networks are active...
	I0319 20:35:36.126542   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network default is active
	I0319 20:35:36.126934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network mk-default-k8s-diff-port-385240 is active
	I0319 20:35:36.127367   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Getting domain xml...
	I0319 20:35:36.128009   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Creating domain...
	I0319 20:35:37.396589   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting to get IP...
	I0319 20:35:37.397626   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398211   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398294   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.398203   60655 retry.go:31] will retry after 263.730992ms: waiting for machine to come up
	I0319 20:35:37.663811   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664345   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664379   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.664300   60655 retry.go:31] will retry after 308.270868ms: waiting for machine to come up
	I0319 20:35:37.974625   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975061   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975095   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.975027   60655 retry.go:31] will retry after 376.884777ms: waiting for machine to come up
	I0319 20:35:38.353624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354101   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354129   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.354056   60655 retry.go:31] will retry after 419.389718ms: waiting for machine to come up
	I0319 20:35:38.774777   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775271   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775299   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.775224   60655 retry.go:31] will retry after 757.534448ms: waiting for machine to come up
	I0319 20:35:39.534258   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534739   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534766   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:39.534698   60655 retry.go:31] will retry after 921.578914ms: waiting for machine to come up
	I0319 20:35:40.457637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458154   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:40.458092   60655 retry.go:31] will retry after 1.079774724s: waiting for machine to come up
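
The retry.go lines for default-k8s-diff-port-385240 show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a growing, jittered delay. A generic retry-with-backoff sketch of that shape; the jitter, cap and attempt count here are assumptions, not the exact values retry.go uses.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a jittered, growing delay between tries (illustrative values).
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
            if delay > 10*time.Second {
                delay = 10 * time.Second
            }
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(8, 250*time.Millisecond, func() error {
            tries++
            if tries < 4 { // pretend the lease only shows up on the 4th lookup
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        })
        fmt.Println("done after", tries, "tries, err =", err)
    }
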
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:38.759504   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.258580   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.539643   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540163   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:41.540113   60655 retry.go:31] will retry after 1.174814283s: waiting for machine to come up
	I0319 20:35:42.716195   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716547   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716576   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:42.716510   60655 retry.go:31] will retry after 1.464439025s: waiting for machine to come up
	I0319 20:35:44.183190   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183673   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183701   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:44.183628   60655 retry.go:31] will retry after 2.304816358s: waiting for machine to come up
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
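
The preload path above copies the cached preloaded-images tarball to /preloaded.tar.lz4 on the guest, unpacks it into /var with `tar -I lz4`, and deletes it. A short sketch that shells out the same commands via os/exec; on the real run these execute over SSH on the guest, so the local helper here is purely illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func run(name string, args ...string) {
        start := time.Now()
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
        }
        fmt.Printf("%s took %v\n", name, time.Since(start))
    }

    func main() {
        // Unpack the preload tarball the same way the log does, then clean up.
        run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        run("sudo", "rm", "-f", "/preloaded.tar.lz4")
    }
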
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
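
kubeadm.go renders the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and kubeadm.yaml from the node settings above and streams them to the guest ("scp memory"). A minimal text/template sketch that reproduces a drop-in like the one quoted earlier in the log; the struct and field names are illustrative, while the flag values come from the ExecStart line above.

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    // node holds just the values interpolated above (illustrative subset).
    type node struct {
        KubernetesVersion string
        NodeName          string
        NodeIP            string
    }

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Render to stdout; minikube instead streams the bytes to the guest ("scp memory").
        err := t.Execute(os.Stdout, node{
            KubernetesVersion: "v1.20.0",
            NodeName:          "old-k8s-version-159022",
            NodeIP:            "192.168.61.28",
        })
        if err != nil {
            panic(err)
        }
    }
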
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
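
The run of `openssl x509 -noout ... -checkend 86400` commands above confirms each control-plane certificate remains valid for at least 24 hours before the existing cluster configuration is reused. The same check expressed in pure Go with crypto/x509, as a sketch; the 24h window and certificate paths are taken from the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside the window,
    // mirroring `openssl x509 -noout -checkend 86400` (86400s = 24h).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            fmt.Println(p, "expires within 24h:", soon, "err:", err)
        }
    }
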
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:43.756917   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:44.893540   59415 pod_ready.go:92] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.893576   59415 pod_ready.go:81] duration metric: took 8.143737931s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.893592   59415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903602   59415 pod_ready.go:92] pod "etcd-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.903640   59415 pod_ready.go:81] duration metric: took 10.03087ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903653   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926651   59415 pod_ready.go:92] pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.926682   59415 pod_ready.go:81] duration metric: took 23.020281ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926696   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935080   59415 pod_ready.go:92] pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.935113   59415 pod_ready.go:81] duration metric: took 8.409239ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935126   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947241   59415 pod_ready.go:92] pod "kube-proxy-qvn26" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.947269   59415 pod_ready.go:81] duration metric: took 12.135421ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947280   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155416   59415 pod_ready.go:92] pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:45.155441   59415 pod_ready.go:81] duration metric: took 208.152938ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155460   59415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:47.165059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:46.490600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491092   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491121   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:46.491050   60655 retry.go:31] will retry after 2.347371858s: waiting for machine to come up
	I0319 20:35:48.841516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.841995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.842018   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:48.841956   60655 retry.go:31] will retry after 2.70576525s: waiting for machine to come up
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
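After rewriting the manifests, the restart path above simply polls for a kube-apiserver process with pgrep at roughly 500ms intervals until it appears (api_server.go:52). A rough Go sketch of that wait loop follows, assuming sudo and pgrep are available on the guest; the function name and timeout handling are illustrative, not minikube's actual retry logic.

    // waitForAPIServer polls for a running kube-apiserver process the same way
    // the log above does, returning once pgrep finds a match or the deadline passes.
    // Illustrative sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(90 * time.Second); err != nil {
            fmt.Println(err)
        }
    }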
	I0319 20:35:49.662363   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.662389   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.549431   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549931   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549959   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:51.549900   60655 retry.go:31] will retry after 3.429745322s: waiting for machine to come up
	I0319 20:35:54.983382   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.983875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Found IP for machine: 192.168.39.77
	I0319 20:35:54.983908   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserving static IP address...
	I0319 20:35:54.983923   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has current primary IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.984212   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.984240   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserved static IP address: 192.168.39.77
	I0319 20:35:54.984292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | skip adding static IP to network mk-default-k8s-diff-port-385240 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"}
	I0319 20:35:54.984307   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for SSH to be available...
	I0319 20:35:54.984322   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Getting to WaitForSSH function...
	I0319 20:35:54.986280   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986591   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.986624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986722   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH client type: external
	I0319 20:35:54.986752   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa (-rw-------)
	I0319 20:35:54.986783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:54.986796   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | About to run SSH command:
	I0319 20:35:54.986805   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | exit 0
	I0319 20:35:55.112421   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:55.112825   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetConfigRaw
	I0319 20:35:55.113456   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.115976   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116349   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.116377   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116587   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:35:55.116847   60008 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:55.116874   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:55.117099   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.119475   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.119911   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.119947   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.120112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.120312   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120478   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120629   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.120793   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.120970   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.120982   60008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:55.229055   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:55.229090   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229360   60008 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385240"
	I0319 20:35:55.229390   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229594   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.232039   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232371   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.232391   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232574   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.232746   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232866   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232967   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.233087   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.233251   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.233264   60008 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385240 && echo "default-k8s-diff-port-385240" | sudo tee /etc/hostname
	I0319 20:35:55.355708   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385240
	
	I0319 20:35:55.355732   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.358292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358610   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.358641   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358880   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.359105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359267   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359415   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.359545   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.359701   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.359724   60008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:55.479083   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:55.479109   60008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:55.479126   60008 buildroot.go:174] setting up certificates
	I0319 20:35:55.479134   60008 provision.go:84] configureAuth start
	I0319 20:35:55.479143   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.479433   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.482040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482378   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.482408   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482535   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.484637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485035   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.485062   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485212   60008 provision.go:143] copyHostCerts
	I0319 20:35:55.485272   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:55.485283   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:55.485334   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:55.485425   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:55.485434   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:55.485454   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:55.485560   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:55.485569   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:55.485586   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:55.485642   60008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385240 san=[127.0.0.1 192.168.39.77 default-k8s-diff-port-385240 localhost minikube]
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.449710   59019 start.go:364] duration metric: took 57.255031003s to acquireMachinesLock for "no-preload-414130"
	I0319 20:35:56.449774   59019 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:56.449786   59019 fix.go:54] fixHost starting: 
	I0319 20:35:56.450187   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:56.450225   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:56.469771   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0319 20:35:56.470265   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:56.470764   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:35:56.470799   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:56.471187   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:56.471362   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:35:56.471545   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:35:56.473295   59019 fix.go:112] recreateIfNeeded on no-preload-414130: state=Stopped err=<nil>
	I0319 20:35:56.473323   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	W0319 20:35:56.473480   59019 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:56.475296   59019 out.go:177] * Restarting existing kvm2 VM for "no-preload-414130" ...
	I0319 20:35:56.476767   59019 main.go:141] libmachine: (no-preload-414130) Calling .Start
	I0319 20:35:56.476947   59019 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:35:56.477657   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:35:56.478036   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:35:56.478443   59019 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:35:56.479131   59019 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:35:53.663220   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:56.163557   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:55.738705   60008 provision.go:177] copyRemoteCerts
	I0319 20:35:55.738779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:55.738812   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.741292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741618   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.741644   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741835   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.741997   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.742105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.742260   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:55.828017   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:55.854341   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0319 20:35:55.881167   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:55.906768   60008 provision.go:87] duration metric: took 427.621358ms to configureAuth
	I0319 20:35:55.906795   60008 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:55.907007   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:55.907097   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.909518   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.909834   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.909863   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.910008   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.910193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910492   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.910670   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.910835   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.910849   60008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:56.207010   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:56.207036   60008 machine.go:97] duration metric: took 1.090170805s to provisionDockerMachine
	I0319 20:35:56.207049   60008 start.go:293] postStartSetup for "default-k8s-diff-port-385240" (driver="kvm2")
	I0319 20:35:56.207066   60008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:56.207086   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.207410   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:56.207435   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.210075   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210494   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.210526   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210671   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.210828   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.211016   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.211167   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.295687   60008 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:56.300508   60008 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:56.300531   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:56.300601   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:56.300677   60008 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:56.300779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:56.310829   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:56.337456   60008 start.go:296] duration metric: took 130.396402ms for postStartSetup
	I0319 20:35:56.337492   60008 fix.go:56] duration metric: took 20.235571487s for fixHost
	I0319 20:35:56.337516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.339907   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340361   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.340388   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340552   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.340749   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.340888   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.341040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.341198   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:56.341357   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:56.341367   60008 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:56.449557   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880556.425761325
	
	I0319 20:35:56.449580   60008 fix.go:216] guest clock: 1710880556.425761325
	I0319 20:35:56.449587   60008 fix.go:229] Guest: 2024-03-19 20:35:56.425761325 +0000 UTC Remote: 2024-03-19 20:35:56.337496936 +0000 UTC m=+175.893119280 (delta=88.264389ms)
	I0319 20:35:56.449619   60008 fix.go:200] guest clock delta is within tolerance: 88.264389ms
	I0319 20:35:56.449624   60008 start.go:83] releasing machines lock for "default-k8s-diff-port-385240", held for 20.347739998s
	I0319 20:35:56.449647   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.449915   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:56.452764   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453172   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.453204   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453363   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.453973   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454275   60008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:56.454328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.454443   60008 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:56.454466   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.457060   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457284   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457383   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457418   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457536   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457555   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457567   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457831   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457977   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458126   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458139   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.458282   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.537675   60008 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:56.564279   60008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:56.708113   60008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:56.716216   60008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:56.716301   60008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:56.738625   60008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:56.738643   60008 start.go:494] detecting cgroup driver to use...
	I0319 20:35:56.738707   60008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:56.756255   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:56.772725   60008 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:56.772785   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:56.793261   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:56.812368   60008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:56.948137   60008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:57.139143   60008 docker.go:233] disabling docker service ...
	I0319 20:35:57.139212   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:57.156414   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:57.173655   60008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:57.313924   60008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:57.459539   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:57.478913   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:57.506589   60008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:57.506663   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.520813   60008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:57.520871   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.534524   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.547833   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.568493   60008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:57.582367   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.595859   60008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.616441   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
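Taken together, the sed edits above point CRI-O at the 3.9 pause image, switch it to the cgroupfs cgroup manager with conmon running in the pod cgroup, and allow unprivileged low ports. Roughly, the resulting /etc/crio/crio.conf.d/02-crio.conf fragment looks like the following; this is a reconstruction from the logged commands, and the section headers and surrounding keys are assumptions rather than the literal file contents.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]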
	I0319 20:35:57.633329   60008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:57.648803   60008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:57.648886   60008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:57.667845   60008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:57.680909   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:57.825114   60008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:57.996033   60008 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:57.996118   60008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:58.001875   60008 start.go:562] Will wait 60s for crictl version
	I0319 20:35:58.001947   60008 ssh_runner.go:195] Run: which crictl
	I0319 20:35:58.006570   60008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:58.060545   60008 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:58.060628   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.104858   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.148992   60008 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:58.150343   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:58.153222   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153634   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:58.153663   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153924   60008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:58.158830   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:58.174622   60008 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:58.174760   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:58.174819   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:58.220802   60008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:58.220879   60008 ssh_runner.go:195] Run: which lz4
	I0319 20:35:58.225914   60008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:58.230673   60008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:58.230702   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:59.959612   60008 crio.go:462] duration metric: took 1.733738299s to copy over tarball
	I0319 20:35:59.959694   60008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
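The preload step above has three parts: `sudo crictl images --output json` shows registry.k8s.io/kube-apiserver:v1.29.3 is absent, the ~403 MB preloaded tarball is copied to /preloaded.tar.lz4, and tar with the lz4 filter unpacks it under /var. A rough Go sketch of the "is it already preloaded?" decision; the JSON field names follow crictl's output, but the helper as a whole is an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // crictlImages mirrors the subset of `crictl images --output json` we need.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(raw []byte, want string) bool {
    	var imgs crictlImages
    	if err := json.Unmarshal(raw, &imgs); err != nil {
    		return false
    	}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			if strings.Contains(tag, want) {
    				return true
    			}
    		}
    	}
    	return false
    }

    func main() {
    	// On the real guest this output comes from: sudo crictl images --output json
    	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl not available here:", err)
    		return
    	}
    	if hasImage(raw, "registry.k8s.io/kube-apiserver:v1.29.3") {
    		fmt.Println("images already preloaded, skipping tarball")
    		return
    	}
    	// Otherwise copy /preloaded.tar.lz4 over and extract it, as the log does:
    	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    	fmt.Println("would copy and extract preloaded tarball")
    }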
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.782684   59019 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:35:57.783613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:57.784088   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:57.784180   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:57.784077   60806 retry.go:31] will retry after 304.011729ms: waiting for machine to come up
	I0319 20:35:58.089864   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.090398   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.090431   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.090325   60806 retry.go:31] will retry after 268.702281ms: waiting for machine to come up
	I0319 20:35:58.360743   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.361173   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.361201   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.361116   60806 retry.go:31] will retry after 373.34372ms: waiting for machine to come up
	I0319 20:35:58.735810   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.736490   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.736518   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.736439   60806 retry.go:31] will retry after 588.9164ms: waiting for machine to come up
	I0319 20:35:59.327363   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.327908   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.327938   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.327881   60806 retry.go:31] will retry after 623.38165ms: waiting for machine to come up
	I0319 20:35:59.952641   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.953108   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.953138   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.953090   60806 retry.go:31] will retry after 896.417339ms: waiting for machine to come up
	I0319 20:36:00.851032   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:00.851485   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:00.851514   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:00.851435   60806 retry.go:31] will retry after 869.189134ms: waiting for machine to come up
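The retry.go entries in this block poll libvirt's DHCP leases with growing, jittered delays until the no-preload-414130 domain reports an IP. A simplified Go sketch of that wait loop; the lease lookup is stubbed out and the backoff constants are illustrative rather than minikube's:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a stand-in for querying the libvirt network's DHCP leases by MAC.
    func lookupIP(mac string) (string, error) {
    	return "", errNoLease // pretend the machine has not come up yet
    }

    // waitForIP retries with a growing, jittered delay, like the retry.go entries above.
    func waitForIP(mac string, attempts int) (string, error) {
    	delay := 300 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // back off gradually
    	}
    	return "", fmt.Errorf("machine never reported an IP for %s", mac)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:f0:f0:55", 3); err != nil {
    		fmt.Println(err)
    	}
    }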
	I0319 20:35:58.168341   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:00.664629   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:02.594104   60008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634373226s)
	I0319 20:36:02.594140   60008 crio.go:469] duration metric: took 2.634502157s to extract the tarball
	I0319 20:36:02.594149   60008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:36:02.635454   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:02.692442   60008 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:36:02.692468   60008 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:36:02.692477   60008 kubeadm.go:928] updating node { 192.168.39.77 8444 v1.29.3 crio true true} ...
	I0319 20:36:02.692613   60008 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-385240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:02.692697   60008 ssh_runner.go:195] Run: crio config
	I0319 20:36:02.749775   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:02.749798   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:02.749809   60008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:02.749828   60008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385240 NodeName:default-k8s-diff-port-385240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:02.749967   60008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-385240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:02.750034   60008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:36:02.760788   60008 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:02.760843   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:02.770999   60008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0319 20:36:02.789881   60008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:36:02.809005   60008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
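The kubeadm.yaml written above (2169 bytes) is the multi-document YAML dumped earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`. A hedged Go sketch that walks such a file document by document with gopkg.in/yaml.v3 and picks out each kind plus the kubelet's cgroupDriver (the embedded sample is trimmed for brevity):

    package main

    import (
    	"fmt"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.29.3
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    failSwapOn: false
    `

    func main() {
    	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			break // io.EOF once all documents are read
    		}
    		fmt.Printf("kind=%v", doc["kind"])
    		if drv, ok := doc["cgroupDriver"]; ok {
    			fmt.Printf(" cgroupDriver=%v", drv)
    		}
    		fmt.Println()
    	}
    }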
	I0319 20:36:02.831122   60008 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:02.835609   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:02.850186   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:02.990032   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:03.013831   60008 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240 for IP: 192.168.39.77
	I0319 20:36:03.013858   60008 certs.go:194] generating shared ca certs ...
	I0319 20:36:03.013879   60008 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:03.014072   60008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:03.014125   60008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:03.014137   60008 certs.go:256] generating profile certs ...
	I0319 20:36:03.014256   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.key
	I0319 20:36:03.014325   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key.5c19d013
	I0319 20:36:03.014389   60008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key
	I0319 20:36:03.014549   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:03.014602   60008 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:03.014626   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:03.014658   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:03.014691   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:03.014728   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:03.014793   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:03.015673   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:03.070837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:03.115103   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:03.150575   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:03.210934   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 20:36:03.254812   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:36:03.286463   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:03.315596   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:03.347348   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:03.375837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:03.407035   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:03.439726   60008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:03.461675   60008 ssh_runner.go:195] Run: openssl version
	I0319 20:36:03.468238   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:03.482384   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487682   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487739   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.494591   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:03.509455   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:03.522545   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527556   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527617   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.533925   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:03.546851   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:03.559553   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564547   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564595   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.570824   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
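The test/ln pairs above install each extra CA under OpenSSL's hashed-name convention: the symlink in /etc/ssl/certs is named after the certificate's subject hash plus a ".0" suffix, which is how 173012.pem becomes 3ec20f2e.0 and minikubeCA.pem becomes b5213941.0. A Go sketch that derives the link name by shelling out to openssl; the paths and the symlink step are illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // subjectHashLink returns the /etc/ssl/certs symlink name openssl would use
    // for certPath, i.e. "<subject-hash>.0".
    func subjectHashLink(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log
    	link, err := subjectHashLink(cert)
    	if err != nil {
    		fmt.Println("openssl hash failed:", err)
    		return
    	}
    	target := filepath.Join("/etc/ssl/certs", link)
    	// Equivalent of: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
    	fmt.Println("would symlink", target, "->", cert)
    	_ = os.Symlink // the call a real implementation would make, not executed in this sketch
    }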
	I0319 20:36:03.584339   60008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:03.589542   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:03.595870   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:03.602530   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:03.609086   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:03.615621   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:03.622477   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
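Each `-checkend 86400` run above asks whether the certificate is still valid 24 hours from now; a non-zero exit would force the cert to be regenerated before the restart continues. The equivalent check in Go with crypto/x509 (the helper name is an assumption; the path is one of those probed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }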
	I0319 20:36:03.629097   60008 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:03.629186   60008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:03.629234   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.674484   60008 cri.go:89] found id: ""
	I0319 20:36:03.674568   60008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:03.686995   60008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:03.687020   60008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:03.687026   60008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:03.687094   60008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:03.702228   60008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:03.703334   60008 kubeconfig.go:125] found "default-k8s-diff-port-385240" server: "https://192.168.39.77:8444"
	I0319 20:36:03.705508   60008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:03.719948   60008 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.77
	I0319 20:36:03.719985   60008 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:03.719997   60008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:03.720073   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.761557   60008 cri.go:89] found id: ""
	I0319 20:36:03.761619   60008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:03.781849   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:03.793569   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:03.793601   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:03.793652   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:36:03.804555   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:03.804605   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:03.816728   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:36:03.828247   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:03.828318   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:03.840814   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.853100   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:03.853168   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.867348   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:36:03.879879   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:03.879944   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:03.893810   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:03.906056   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:04.038911   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.173514   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134566983s)
	I0319 20:36:05.173547   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.395951   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.480821   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.721671   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:01.722186   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:01.722212   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:01.722142   60806 retry.go:31] will retry after 997.299446ms: waiting for machine to come up
	I0319 20:36:02.720561   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:02.721007   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:02.721037   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:02.720958   60806 retry.go:31] will retry after 1.64420318s: waiting for machine to come up
	I0319 20:36:04.367668   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:04.368140   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:04.368179   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:04.368083   60806 retry.go:31] will retry after 1.972606192s: waiting for machine to come up
	I0319 20:36:06.342643   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:06.343192   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:06.343236   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:06.343136   60806 retry.go:31] will retry after 2.056060208s: waiting for machine to come up
	I0319 20:36:03.164447   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.665089   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.581797   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:05.581879   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.082565   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.582872   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.628756   60008 api_server.go:72] duration metric: took 1.046965637s to wait for apiserver process to appear ...
	I0319 20:36:06.628786   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:06.628808   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:06.629340   60008 api_server.go:269] stopped: https://192.168.39.77:8444/healthz: Get "https://192.168.39.77:8444/healthz": dial tcp 192.168.39.77:8444: connect: connection refused
	I0319 20:36:07.128890   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.231991   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.232024   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.232039   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.280784   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.280820   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.629356   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.660326   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:09.660434   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.128936   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.139305   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:10.139336   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.629187   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.635922   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:36:10.654111   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:36:10.654137   60008 api_server.go:131] duration metric: took 4.025345365s to wait for apiserver health ...
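The healthz exchange above is the expected shape of a control-plane restart: connection refused while the apiserver binds, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while post-start hooks finish, then 200. A hedged Go sketch of a poll loop that only accepts 200; TLS verification is skipped purely because this sketch, like the probe, carries no client certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    // 403 and 500 responses are treated as "not ready yet", matching the log.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Anonymous probe: skip verification in this sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil
    			}
    			fmt.Println("healthz not ready, status:", code)
    		} else {
    			fmt.Println("healthz not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.77:8444/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }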
	I0319 20:36:10.654146   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:10.654154   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:10.656104   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.401478   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:08.402086   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:08.402111   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:08.402001   60806 retry.go:31] will retry after 2.487532232s: waiting for machine to come up
	I0319 20:36:10.891005   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:10.891550   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:10.891591   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:10.891503   60806 retry.go:31] will retry after 3.741447035s: waiting for machine to come up
	I0319 20:36:08.163468   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.165537   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:12.661667   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.657654   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:10.672795   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
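The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen by the "recommending bridge" decision earlier. The Go sketch below prints an illustrative bridge-plus-portmap conflist of roughly that shape using the 10.244.0.0/16 pod CIDR from the kubeadm config; it is an assumption for illustration, not the literal file minikube writes:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Illustrative only: a minimal bridge + portmap chain with the pod CIDR
    	// seen earlier in the log, not minikube's exact generated file.
    	conflist := map[string]interface{}{
    		"cniVersion": "0.4.0",
    		"name":       "bridge",
    		"plugins": []map[string]interface{}{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]interface{}{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }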
	I0319 20:36:10.715527   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:10.728811   60008 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:10.728850   60008 system_pods.go:61] "coredns-76f75df574-hsdk2" [319e5411-97e4-4021-80d0-b39195acb696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:10.728862   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [d10870b0-a0e1-47aa-baf9-07065c1d9142] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:10.728873   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [4925af1b-328f-42ee-b2ef-78b58fcbdd0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:10.728883   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [6dad1c39-3fbc-4364-9ed8-725c0f518191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:10.728889   60008 system_pods.go:61] "kube-proxy-bwj22" [9cc86566-612e-48bc-94c9-a2dad6978c92] Running
	I0319 20:36:10.728896   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [e9c38443-ea8c-4590-94ca-61077f850b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:10.728904   60008 system_pods.go:61] "metrics-server-57f55c9bc5-ddl2q" [ecb174e4-18b0-459e-afb1-137a1f6bdd67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:10.728919   60008 system_pods.go:61] "storage-provisioner" [95fb27b5-769c-4420-8021-3d97942c9f42] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0319 20:36:10.728931   60008 system_pods.go:74] duration metric: took 13.321799ms to wait for pod list to return data ...
	I0319 20:36:10.728944   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:10.743270   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:10.743312   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:10.743326   60008 node_conditions.go:105] duration metric: took 14.37332ms to run NodePressure ...
	I0319 20:36:10.743348   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:11.028786   60008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034096   60008 kubeadm.go:733] kubelet initialised
	I0319 20:36:11.034115   60008 kubeadm.go:734] duration metric: took 5.302543ms waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034122   60008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:11.040118   60008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.046021   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046048   60008 pod_ready.go:81] duration metric: took 5.906752ms for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.046060   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046069   60008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.051677   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051700   60008 pod_ready.go:81] duration metric: took 5.61463ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.051712   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051721   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.057867   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057893   60008 pod_ready.go:81] duration metric: took 6.163114ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.057905   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057912   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:13.065761   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.634526   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:14.635125   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:14.635155   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:14.635074   60806 retry.go:31] will retry after 3.841866145s: waiting for machine to come up
	I0319 20:36:14.662669   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.664913   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:15.565340   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:17.567623   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:19.570775   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.479276   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.479810   59019 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:36:18.479836   59019 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:36:18.479852   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.480232   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.480279   59019 main.go:141] libmachine: (no-preload-414130) DBG | skip adding static IP to network mk-no-preload-414130 - found existing host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"}
	I0319 20:36:18.480297   59019 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:36:18.480319   59019 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:36:18.480336   59019 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:36:18.482725   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483025   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.483052   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483228   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:36:18.483262   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:36:18.483299   59019 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:36:18.483320   59019 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:36:18.483373   59019 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:36:18.612349   59019 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
	I0319 20:36:18.612766   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:36:18.613495   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.616106   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616459   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.616498   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616729   59019 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:36:18.616940   59019 machine.go:94] provisionDockerMachine start ...
	I0319 20:36:18.616957   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:18.617150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.619316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619599   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.619620   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619750   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.619895   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620054   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620166   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.620339   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.620508   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.620521   59019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:36:18.729177   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:36:18.729203   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729483   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:36:18.729511   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729728   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.732330   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732633   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.732664   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.732944   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733087   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733211   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.733347   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.733513   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.733528   59019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:36:18.857142   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:36:18.857178   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.860040   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860434   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.860465   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860682   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.860907   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861102   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861283   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.861462   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.861661   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.861685   59019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:36:18.976726   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:36:18.976755   59019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:36:18.976776   59019 buildroot.go:174] setting up certificates
	I0319 20:36:18.976789   59019 provision.go:84] configureAuth start
	I0319 20:36:18.976803   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.977095   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.980523   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.980948   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.980976   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.981150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.983394   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983720   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.983741   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983887   59019 provision.go:143] copyHostCerts
	I0319 20:36:18.983949   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:36:18.983959   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:36:18.984009   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:36:18.984092   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:36:18.984099   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:36:18.984118   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:36:18.984224   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:36:18.984237   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:36:18.984284   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:36:18.984348   59019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
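(Annotation: provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, and the host names. A self-signed sketch of the same idea with crypto/x509 follows; minikube actually signs the server cert with its CA key rather than self-signing, and the validity period here is an assumption.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-414130"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-414130"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.29")},
	}
	// Self-signed for brevity; the real server.pem is signed by the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}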
	I0319 20:36:19.241365   59019 provision.go:177] copyRemoteCerts
	I0319 20:36:19.241422   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:36:19.241445   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.244060   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244362   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.244388   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244593   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.244781   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.244956   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.245125   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.332749   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:36:19.360026   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:36:19.386680   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:36:19.414673   59019 provision.go:87] duration metric: took 437.87318ms to configureAuth
	I0319 20:36:19.414697   59019 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:36:19.414893   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:36:19.414964   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.417627   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.417949   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.417974   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.418139   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.418351   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418513   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418687   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.418854   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.419099   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.419120   59019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:36:19.712503   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:36:19.712538   59019 machine.go:97] duration metric: took 1.095583423s to provisionDockerMachine
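(Annotation: the literal "%!s(MISSING)" in the SSH command above, and in the "date +%!s(MISSING).%!N(MISSING)" line further down, is not part of the command that actually ran. It is Go's fmt package flagging a %s verb with no matching argument when the command template is passed to a Printf-style logger. A tiny reproduction, assuming the logger receives the command string as its format argument:)

package main

import "fmt"

// logf stands in for a Printf-style logger like the one emitting the lines above.
func logf(format string, args ...interface{}) {
	fmt.Printf(format+"\n", args...)
}

func main() {
	// The remote command genuinely contains "printf %s ..."; logging it with no
	// arguments renders the verb as %!s(MISSING) in the captured output.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`
	logf(cmd)
}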
	I0319 20:36:19.712554   59019 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:36:19.712573   59019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:36:19.712595   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.712918   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:36:19.712953   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.715455   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715779   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.715813   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715917   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.716098   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.716307   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.716455   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.801402   59019 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:36:19.806156   59019 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:36:19.806181   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:36:19.806253   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:36:19.806330   59019 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:36:19.806451   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:36:19.818601   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:19.845698   59019 start.go:296] duration metric: took 133.131789ms for postStartSetup
	I0319 20:36:19.845728   59019 fix.go:56] duration metric: took 23.395944884s for fixHost
	I0319 20:36:19.845746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.848343   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848727   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.848760   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848909   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.849090   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849256   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849452   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.849667   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.849843   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.849853   59019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:36:19.957555   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880579.901731357
	
	I0319 20:36:19.957574   59019 fix.go:216] guest clock: 1710880579.901731357
	I0319 20:36:19.957581   59019 fix.go:229] Guest: 2024-03-19 20:36:19.901731357 +0000 UTC Remote: 2024-03-19 20:36:19.845732308 +0000 UTC m=+363.236094224 (delta=55.999049ms)
	I0319 20:36:19.957612   59019 fix.go:200] guest clock delta is within tolerance: 55.999049ms
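(Annotation: fix.go above reads the guest clock over SSH, compares it to the host clock, and accepts the ~56ms delta as within tolerance. A small sketch of that comparison; the 2s tolerance value is an assumption for illustration.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is acceptable,
// mirroring the "guest clock delta is within tolerance" check logged above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(56 * time.Millisecond) // delta comparable to the ~56ms above
	fmt.Println(withinTolerance(guest, host, 2*time.Second))
}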
	I0319 20:36:19.957625   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 23.507874645s
	I0319 20:36:19.957656   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.957889   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:19.960613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.960930   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.960957   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.961108   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961627   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961804   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961883   59019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:36:19.961930   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.961996   59019 ssh_runner.go:195] Run: cat /version.json
	I0319 20:36:19.962022   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.964593   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.964790   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965034   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965057   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965250   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965368   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965397   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965416   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965529   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965611   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965677   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965764   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.965788   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965893   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:20.041410   59019 ssh_runner.go:195] Run: systemctl --version
	I0319 20:36:20.067540   59019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:36:20.214890   59019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:36:20.222680   59019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:36:20.222735   59019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:36:20.239981   59019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:36:20.240003   59019 start.go:494] detecting cgroup driver to use...
	I0319 20:36:20.240066   59019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:36:20.260435   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:36:20.277338   59019 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:36:20.277398   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:36:20.294069   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:36:20.309777   59019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:36:20.443260   59019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:36:20.595476   59019 docker.go:233] disabling docker service ...
	I0319 20:36:20.595552   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:36:20.612622   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:36:20.627717   59019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:36:20.790423   59019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:36:20.915434   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:36:20.932043   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:36:20.953955   59019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:36:20.954026   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.966160   59019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:36:20.966230   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.978217   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.990380   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.002669   59019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:36:21.014880   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.026125   59019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.045239   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.056611   59019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:36:21.067763   59019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:36:21.067818   59019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:36:21.084054   59019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:36:21.095014   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:21.237360   59019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:36:21.396979   59019 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:36:21.397047   59019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:36:21.402456   59019 start.go:562] Will wait 60s for crictl version
	I0319 20:36:21.402509   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.406963   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:36:21.446255   59019 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:36:21.446351   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.477273   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.519196   59019 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:36:21.520520   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:21.523401   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.523792   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:21.523822   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.524033   59019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:36:21.528973   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:21.543033   59019 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0319 20:36:21.543154   59019 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:36:21.543185   59019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:21.583439   59019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:36:21.583472   59019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:36:21.583515   59019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.583551   59019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.583566   59019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.583610   59019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.583622   59019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.583646   59019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.583731   59019 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:36:21.583766   59019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585216   59019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585225   59019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.585236   59019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.585210   59019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.585247   59019 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:36:21.585253   59019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.585285   59019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.585297   59019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:19.163241   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:21.165282   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:22.071931   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:24.567506   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.567537   60008 pod_ready.go:81] duration metric: took 13.509614974s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.567553   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573414   60008 pod_ready.go:92] pod "kube-proxy-bwj22" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.573444   60008 pod_ready.go:81] duration metric: took 5.881434ms for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573457   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580429   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.580452   60008 pod_ready.go:81] duration metric: took 6.984808ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580463   60008 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
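(Annotation: process 59621 above is looping on "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms, waiting for the apiserver process to reappear after a restart. A local sketch of that wait loop follows; the real runner executes pgrep over SSH rather than locally, and the 4-minute deadline is an assumption. The 500ms interval matches the timestamps in the log.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
// or the context deadline passes.
func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServerProcess(ctx))
}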
	I0319 20:36:21.722682   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.727610   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:36:21.738933   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.740326   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.772871   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.801213   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.829968   59019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:36:21.830008   59019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.830053   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.832291   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.945513   59019 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:36:21.945558   59019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.945612   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945618   59019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:36:21.945651   59019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.945663   59019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:36:21.945687   59019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.945695   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945721   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970009   59019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:36:21.970052   59019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.970079   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.970090   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970100   59019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:36:21.970125   59019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.970149   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970177   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:22.062153   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:36:22.062260   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.063754   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.063840   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.091003   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091052   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:22.091104   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091335   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:22.091372   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0319 20:36:22.091382   59019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091405   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:36:22.091423   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0319 20:36:22.091426   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091475   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:22.096817   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0319 20:36:22.155139   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.155289   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.190022   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0319 20:36:22.190072   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.190166   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.507872   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445006   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.353551966s)
	I0319 20:36:26.445031   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0319 20:36:26.445049   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445063   59019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (4.289744726s)
	I0319 20:36:26.445095   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445099   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445107   59019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (4.254920134s)
	I0319 20:36:26.445135   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445176   59019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.937263856s)
	I0319 20:36:26.445228   59019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:36:26.445254   59019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445296   59019 ssh_runner.go:195] Run: which crictl
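(Annotation: the "copy: skipping ... (exists)" decisions from ssh_runner.go:356 above come from comparing the remote file's size and mtime, via `stat -c "%s %y"`, against the local cached tarball before scp'ing it. A hedged sketch of that decision using local paths; the exact comparison minikube performs may differ, and the paths in main are illustrative only.)

package main

import (
	"fmt"
	"os"
)

// sameSizeAndMtime mirrors the skip decision above: if size and modification
// time match, the transfer can be skipped. (The real check parses the remote
// stat output; here both sides are local files for simplicity.)
func sameSizeAndMtime(src, dst string) (bool, error) {
	a, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	b, err := os.Stat(dst)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil // destination missing: must copy
		}
		return false, err
	}
	return a.Size() == b.Size() && a.ModTime().Equal(b.ModTime()), nil
}

func main() {
	skip, err := sameSizeAndMtime("/tmp/etcd_3.5.12-0", "/var/lib/minikube/images/etcd_3.5.12-0")
	fmt.Println(skip, err)
}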
	I0319 20:36:23.665322   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.167485   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.588550   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:29.088665   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.407117   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (1.96198659s)
	I0319 20:36:28.407156   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:36:28.407176   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407171   59019 ssh_runner.go:235] Completed: which crictl: (1.961850083s)
	I0319 20:36:28.407212   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407244   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:30.495567   59019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088296063s)
	I0319 20:36:30.495590   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.088358118s)
	I0319 20:36:30.495606   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:36:30.495617   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:36:30.495633   59019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495686   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495735   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:28.662588   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.163637   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.589581   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:34.090180   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.473194   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977482574s)
	I0319 20:36:32.473238   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:36:32.473263   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:32.473260   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977498716s)
	I0319 20:36:32.473294   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0319 20:36:32.473311   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:34.927774   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.454440131s)
	I0319 20:36:34.927813   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:36:34.927842   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:34.927888   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:33.664608   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.163358   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.588459   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:38.590173   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.512011   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.584091271s)
	I0319 20:36:37.512048   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0319 20:36:37.512077   59019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:37.512134   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:38.589202   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.077040733s)
	I0319 20:36:38.589231   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0319 20:36:38.589263   59019 cache_images.go:123] Successfully loaded all cached images
	I0319 20:36:38.589278   59019 cache_images.go:92] duration metric: took 17.005785801s to LoadCachedImages
	I0319 20:36:38.589291   59019 kubeadm.go:928] updating node { 192.168.72.29 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:36:38.589415   59019 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-414130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:38.589495   59019 ssh_runner.go:195] Run: crio config
	I0319 20:36:38.648312   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:38.648334   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:38.648346   59019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:38.648366   59019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-414130 NodeName:no-preload-414130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:38.648494   59019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-414130"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:38.648554   59019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:36:38.665850   59019 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:38.665928   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:38.678211   59019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0319 20:36:38.701657   59019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:36:38.721498   59019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0319 20:36:38.741159   59019 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:38.745617   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:38.759668   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:38.896211   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:38.916698   59019 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130 for IP: 192.168.72.29
	I0319 20:36:38.916720   59019 certs.go:194] generating shared ca certs ...
	I0319 20:36:38.916748   59019 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:38.916888   59019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:38.916930   59019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:38.916943   59019 certs.go:256] generating profile certs ...
	I0319 20:36:38.917055   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.key
	I0319 20:36:38.917134   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key.2d7d554c
	I0319 20:36:38.917185   59019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key
	I0319 20:36:38.917324   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:38.917381   59019 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:38.917396   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:38.917434   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:38.917469   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:38.917501   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:38.917553   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:38.918130   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:38.959630   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:39.007656   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:39.046666   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:39.078901   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:36:39.116600   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:36:39.158517   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:39.188494   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:39.218770   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:39.247341   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:39.275816   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:39.303434   59019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:39.326445   59019 ssh_runner.go:195] Run: openssl version
	I0319 20:36:39.333373   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:39.346280   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352619   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352686   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.359796   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:39.372480   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:39.384231   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389760   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389818   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.396639   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:39.408887   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:39.421847   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427779   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427848   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.434447   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:39.446945   59019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:39.452219   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:39.458729   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:39.465298   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:39.471931   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:39.478810   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:39.485551   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:36:39.492084   59019 kubeadm.go:391] StartCluster: {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:39.492210   59019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:39.492297   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.535094   59019 cri.go:89] found id: ""
	I0319 20:36:39.535157   59019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:39.549099   59019 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:39.549123   59019 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:39.549129   59019 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:39.549179   59019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:39.560565   59019 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:39.561570   59019 kubeconfig.go:125] found "no-preload-414130" server: "https://192.168.72.29:8443"
	I0319 20:36:39.563750   59019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:39.578708   59019 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.29
	I0319 20:36:39.578746   59019 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:39.578756   59019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:39.578799   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.620091   59019 cri.go:89] found id: ""
	I0319 20:36:39.620152   59019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:39.639542   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:39.652115   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:39.652133   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:39.652190   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:36:39.664047   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:39.664114   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:39.675218   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:36:39.685482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:39.685533   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:39.695803   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.705482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:39.705538   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.715747   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:36:39.725260   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:39.725324   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:39.735246   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:39.745069   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:39.862945   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.548185   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.794369   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.891458   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.992790   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:40.992871   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.493489   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.164706   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:40.662753   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:42.663084   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.087924   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:43.087987   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.993208   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.040237   59019 api_server.go:72] duration metric: took 1.047447953s to wait for apiserver process to appear ...
	I0319 20:36:42.040278   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:42.040323   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:42.040927   59019 api_server.go:269] stopped: https://192.168.72.29:8443/healthz: Get "https://192.168.72.29:8443/healthz": dial tcp 192.168.72.29:8443: connect: connection refused
	I0319 20:36:42.541457   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.853765   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:44.853796   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:44.853834   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.967607   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:44.967648   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.040791   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.049359   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.049400   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.541024   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.545880   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.545907   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.041423   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.046075   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.046101   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.541147   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.546547   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.546587   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:44.664041   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.163545   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.040899   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.046413   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.046453   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:47.541051   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.547309   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.547334   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.040856   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.046293   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:48.046318   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.540858   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.545311   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:36:48.551941   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:36:48.551962   59019 api_server.go:131] duration metric: took 6.511678507s to wait for apiserver health ...
	I0319 20:36:48.551970   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:48.551976   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:48.553824   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:45.588011   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.589644   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:50.088130   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:48.555094   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:48.566904   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:48.592246   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:48.603249   59019 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:48.603277   59019 system_pods.go:61] "coredns-7db6d8ff4d-t42ph" [bc831304-6e17-452d-8059-22bb46bad525] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:48.603284   59019 system_pods.go:61] "etcd-no-preload-414130" [e2ac0f77-fade-4ac6-a472-58df4040a57d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:48.603294   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [1128c23f-0cc6-4cd4-aeed-32f3d4570e2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:48.603300   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [b03747b6-c3ed-44cf-bcc8-dc2cea408100] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:48.603304   59019 system_pods.go:61] "kube-proxy-dttkh" [23ac1cd6-588b-4745-9c0b-740f9f0e684c] Running
	I0319 20:36:48.603313   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [99fde84c-78d6-4c57-8889-c0d9f3b55a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:48.603318   59019 system_pods.go:61] "metrics-server-569cc877fc-jvlnl" [318246fd-b809-40fa-8aff-78eb33ea10fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:48.603322   59019 system_pods.go:61] "storage-provisioner" [80470118-b092-4ba1-b830-d6f13173434d] Running
	I0319 20:36:48.603327   59019 system_pods.go:74] duration metric: took 11.054488ms to wait for pod list to return data ...
	I0319 20:36:48.603336   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:48.606647   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:48.606667   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:48.606678   59019 node_conditions.go:105] duration metric: took 3.33741ms to run NodePressure ...
	I0319 20:36:48.606693   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:48.888146   59019 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898053   59019 kubeadm.go:733] kubelet initialised
	I0319 20:36:48.898073   59019 kubeadm.go:734] duration metric: took 9.903203ms waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898082   59019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:48.911305   59019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:50.918568   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:49.664061   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.162467   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.588174   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:55.088783   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:52.920852   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:54.419386   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.419410   59019 pod_ready.go:81] duration metric: took 5.508083852s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.419420   59019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926059   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.926081   59019 pod_ready.go:81] duration metric: took 506.65554ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926090   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930519   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.930538   59019 pod_ready.go:81] duration metric: took 4.441479ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930546   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.436969   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.436991   59019 pod_ready.go:81] duration metric: took 506.439126ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.437002   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443096   59019 pod_ready.go:92] pod "kube-proxy-dttkh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.443120   59019 pod_ready.go:81] duration metric: took 6.110267ms for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443132   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465091   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:56.465114   59019 pod_ready.go:81] duration metric: took 1.021974956s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465123   59019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.163556   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.663128   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:57.589188   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.093044   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:58.471804   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.478113   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:58.663274   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.663336   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.663834   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.588018   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.087994   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:02.973736   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.471180   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.161734   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.161995   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.091895   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.588324   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:07.971754   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.974080   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.162947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:11.661800   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.088585   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.588430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:12.475641   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.971849   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:13.662232   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:15.663770   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.588577   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.590602   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.972091   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.972471   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.473526   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.161717   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:20.166272   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:22.661804   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.088608   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:23.587236   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:23.975755   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.472094   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:24.663592   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.664475   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:25.589800   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.087868   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.088398   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:28.472705   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.972037   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:29.162647   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.662382   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:32.088569   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.587686   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:32.972676   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:35.471024   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.161665   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.169093   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.587810   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.590891   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:37.472396   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:39.472831   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.662719   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:40.663337   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:41.088396   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.089545   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.972560   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:44.471857   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.162059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.163324   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:47.662016   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.588501   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:48.087736   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:50.091413   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
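
(The repeated "connection to the server localhost:8443 was refused" stanzas all have the same cause: the kube-apiserver container never comes up on this node, so every "kubectl describe nodes" attempt fails identically. A minimal sketch of a reachability check that separates "apiserver down" from other kubectl failures is shown below; it assumes the default localhost:8443 endpoint seen in the log and is illustrative only, not minikube's actual helper.)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port the log's kubectl invocations are hitting.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Connection refused matches the symptom above: the kube-apiserver
		// container is not running, so kubectl cannot possibly succeed.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open; kubectl failures have another cause")
}
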
	I0319 20:37:46.972679   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.473552   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.665655   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.164565   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.587562   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.587929   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:51.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.471542   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.472616   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.663037   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.666932   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.588855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.087276   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:58.971607   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.474225   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.165581   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.169140   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.087715   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.092449   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.972924   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.473336   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.665054   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.666413   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.588698   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.087857   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.088761   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:08.475109   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.973956   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.162101   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.663715   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:12.588671   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.088453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
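
(The interleaved pod_ready lines come from three other test processes, PIDs 59019, 59415 and 60008, each waiting for a metrics-server pod to report Ready and re-checking every couple of seconds. The sketch below shows that kind of readiness poll done by shelling out to kubectl, in the spirit of the shelled-out commands in this log; the pod name and namespace are copied from the log, while the helper itself is illustrative and is not minikube's pod_ready implementation.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for attempt := 0; attempt < 150; attempt++ {
		ready, err := podReady("kube-system", "metrics-server-57f55c9bc5-ddl2q")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		// The log shows a re-check roughly every 2-3 seconds.
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
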
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
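	The block that just ended (and keeps repeating below) appears to be minikube waiting for the apiserver of this profile: it pgreps for a kube-apiserver process, asks CRI-O via crictl whether any control-plane containers exist, and, finding none, falls back to collecting kubelet, dmesg, node, CRI-O and container-status diagnostics. A minimal sketch of the same checks run by hand on the node (every command is taken verbatim from the log above; getting a shell on the node, e.g. via minikube ssh, is assumed):
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no apiserver container yet
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	The "describe nodes" step fails with "connection to the server localhost:8443 was refused" because no apiserver container has come up, which is why the same cycle repeats in the log that follows.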
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:13.473479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.971583   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:13.162747   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.163005   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.662157   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.587779   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:19.588889   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
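	The interleaved pod_ready.go:102 lines appear to come from the other StartStop profiles running in parallel (PIDs 59019, 59415, 60008); each is polling its kube-system metrics-server pod until its Ready condition reports True. A hand-run equivalent of that readiness check, for illustration only (CONTEXT stands for whichever profile's kubeconfig context owns the pod and is not shown in this excerpt; the jsonpath filter is mine, not what minikube itself runs):
	    kubectl --context "$CONTEXT" -n kube-system get pod metrics-server-569cc877fc-jvlnl \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	A result of "False" corresponds to the "Ready":"False" status these log lines keep reporting.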
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:18.473455   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.473644   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.162290   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:22.167423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.589226   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.090617   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:22.473728   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.971753   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.663722   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.665702   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.589574   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.088379   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:38:26.972361   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.471757   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.473307   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:28.666420   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.162701   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.089336   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.587759   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.473519   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:35.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.665536   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.161727   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.087816   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.587570   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:38.471727   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.472995   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.162339   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.162741   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:42.661979   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.588023   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.088381   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:45.091312   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:42.474517   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.971377   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.662325   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.663603   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:47.587861   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:50.086555   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.975287   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.475667   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.164849   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:51.661947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.087983   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.588099   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:51.971311   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:53.971907   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:55.972358   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.162991   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.163316   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.588120   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:59.090000   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:57.973293   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.471215   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:58.662516   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.663859   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.588049   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.589283   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.471981   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:04.472495   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.164344   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:05.665651   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:06.086880   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.088337   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:06.971135   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.971957   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.473482   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.162486   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.662096   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:12.662841   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.587271   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:13.087563   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:15.088454   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
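(Editorial sketch.) The cycle above repeatedly lists CRI containers by name, finds none, and falls back to gathering kubelet, dmesg, CRI-O and container-status logs. The Go program below is a minimal, hypothetical illustration of that kind of check, not minikube's cri.go/logs.go implementation: it runs the same "crictl ps -a --quiet --name=..." command locally and assumes crictl and sudo are available on the host (minikube issues the command over SSH instead).

// Minimal sketch (not minikube's code): list CRI containers by name and
// report whether any control-plane container exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "crictl ps -a --quiet --name=<name>" call
// seen in the log: one container ID per non-empty output line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the "No container was found matching ..." warnings above.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}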
	I0319 20:39:13.972462   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.471598   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:14.664028   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.665749   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.587968   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.087138   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.471693   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.473198   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:19.162423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.164242   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:22.087921   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.089243   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
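(Editorial sketch.) Every "describe nodes" attempt above fails with a refused connection to localhost:8443, which is consistent with no kube-apiserver container being found. A small, hypothetical Go probe like the one below, which is not part of the test suite, can confirm whether anything is listening on that port.

// Minimal sketch: probe the apiserver port that "kubectl describe nodes"
// could not reach above. A refused TCP connection means nothing is
// listening on 127.0.0.1:8443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}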
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:22.971141   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.971943   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:23.663805   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.162590   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.587708   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.588720   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.972042   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.972160   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:30.973402   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.664060   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.161446   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.092087   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.588849   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.473205   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.971076   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.162170   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.164007   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:37.662820   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.588912   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:38.086910   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:40.089385   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:37.971651   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.973467   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.663277   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.162392   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.587250   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.589855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.472493   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.973064   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.162740   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:45.162232   59415 pod_ready.go:81] duration metric: took 4m0.006756965s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:39:45.162255   59415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:39:45.162262   59415 pod_ready.go:38] duration metric: took 4m8.418792568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:39:45.162277   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:39:45.162309   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.162363   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.219659   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:45.219685   59415 cri.go:89] found id: ""
	I0319 20:39:45.219694   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:45.219737   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.225012   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.225072   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.268783   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.268803   59415 cri.go:89] found id: ""
	I0319 20:39:45.268810   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:45.268875   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.273758   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.273813   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:45.316870   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.316893   59415 cri.go:89] found id: ""
	I0319 20:39:45.316901   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:45.316942   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.321910   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:45.321968   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:45.360077   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:45.360098   59415 cri.go:89] found id: ""
	I0319 20:39:45.360105   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:45.360157   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.365517   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:45.365580   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:45.407686   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:45.407704   59415 cri.go:89] found id: ""
	I0319 20:39:45.407711   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:45.407752   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.412894   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:45.412954   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:45.451930   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:45.451953   59415 cri.go:89] found id: ""
	I0319 20:39:45.451964   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:45.452009   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.456634   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:45.456699   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:45.498575   59415 cri.go:89] found id: ""
	I0319 20:39:45.498601   59415 logs.go:276] 0 containers: []
	W0319 20:39:45.498611   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:45.498619   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:45.498678   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:45.548381   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.548400   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.548405   59415 cri.go:89] found id: ""
	I0319 20:39:45.548411   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:45.548469   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.553470   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.558445   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:45.558471   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.603464   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:45.603490   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.650631   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:45.650663   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:45.668744   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:45.668775   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:45.823596   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:45.823625   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.891879   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:45.891911   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.944237   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:45.944284   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:46.005819   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:46.005848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:46.069819   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.069848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.648008   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.648051   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:46.701035   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.701073   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.753159   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:46.753189   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:46.804730   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:46.804767   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:47.087453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.088165   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:39:47.471171   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.472374   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:49.351186   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.368780   59415 api_server.go:72] duration metric: took 4m19.832131165s to wait for apiserver process to appear ...
	I0319 20:39:49.368806   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:39:49.368844   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:49.368913   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:49.408912   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:49.408937   59415 cri.go:89] found id: ""
	I0319 20:39:49.408947   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:49.409010   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.414194   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:49.414263   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:49.456271   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.456298   59415 cri.go:89] found id: ""
	I0319 20:39:49.456307   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:49.456374   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.461250   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:49.461316   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:49.510029   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:49.510052   59415 cri.go:89] found id: ""
	I0319 20:39:49.510061   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:49.510119   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.515604   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:49.515667   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:49.561004   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:49.561026   59415 cri.go:89] found id: ""
	I0319 20:39:49.561034   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:49.561100   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.566205   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:49.566276   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:49.610666   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:49.610685   59415 cri.go:89] found id: ""
	I0319 20:39:49.610693   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:49.610735   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.615683   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:49.615730   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:49.657632   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:49.657648   59415 cri.go:89] found id: ""
	I0319 20:39:49.657655   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:49.657711   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.662128   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:49.662172   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:49.699037   59415 cri.go:89] found id: ""
	I0319 20:39:49.699060   59415 logs.go:276] 0 containers: []
	W0319 20:39:49.699068   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:49.699074   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:49.699131   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:49.754331   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:49.754353   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:49.754359   59415 cri.go:89] found id: ""
	I0319 20:39:49.754368   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:49.754437   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.759210   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.763797   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:49.763816   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.818285   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:49.818314   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:49.946232   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:49.946266   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.994160   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:49.994186   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:50.042893   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:50.042923   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:50.099333   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:50.099362   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:50.547046   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:50.547082   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:50.593081   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:50.593111   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:50.632611   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:50.632643   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:50.689610   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:50.689641   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:50.707961   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:50.707997   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:50.752684   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:50.752713   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:50.790114   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:50.790139   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:51.089647   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.588183   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:39:51.972170   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:54.471260   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:56.472093   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.338254   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:39:53.343669   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:39:53.344796   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:39:53.344816   59415 api_server.go:131] duration metric: took 3.976004163s to wait for apiserver health ...
	I0319 20:39:53.344824   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:39:53.344854   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:53.344896   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:53.407914   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.407939   59415 cri.go:89] found id: ""
	I0319 20:39:53.407948   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:53.408000   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.414299   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:53.414360   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:53.466923   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:53.466944   59415 cri.go:89] found id: ""
	I0319 20:39:53.466953   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:53.467006   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.472181   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:53.472247   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:53.511808   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:53.511830   59415 cri.go:89] found id: ""
	I0319 20:39:53.511839   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:53.511900   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.517386   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:53.517445   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:53.560360   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:53.560383   59415 cri.go:89] found id: ""
	I0319 20:39:53.560390   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:53.560433   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.565131   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:53.565181   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:53.611243   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:53.611264   59415 cri.go:89] found id: ""
	I0319 20:39:53.611273   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:53.611326   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.616327   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:53.616391   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:53.656775   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:53.656794   59415 cri.go:89] found id: ""
	I0319 20:39:53.656801   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:53.656846   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.661915   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:53.661966   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:53.700363   59415 cri.go:89] found id: ""
	I0319 20:39:53.700389   59415 logs.go:276] 0 containers: []
	W0319 20:39:53.700396   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:53.700401   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:53.700454   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:53.750337   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:53.750357   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:53.750360   59415 cri.go:89] found id: ""
	I0319 20:39:53.750373   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:53.750426   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.755835   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.761078   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:53.761099   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:53.812898   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:53.812928   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:53.934451   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:53.934482   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.989117   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:53.989148   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:54.386028   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:54.386060   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:54.437864   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:54.437893   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:54.456559   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:54.456584   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:54.506564   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:54.506593   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:54.551120   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:54.551151   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:54.595768   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:54.595794   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:54.637715   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:54.637745   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:54.689666   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:54.689706   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:54.731821   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:54.731851   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:57.287839   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:39:57.287866   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.287870   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.287874   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.287878   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.287881   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.287884   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.287890   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.287894   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.287901   59415 system_pods.go:74] duration metric: took 3.943071923s to wait for pod list to return data ...
	I0319 20:39:57.287907   59415 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:39:57.290568   59415 default_sa.go:45] found service account: "default"
	I0319 20:39:57.290587   59415 default_sa.go:55] duration metric: took 2.674741ms for default service account to be created ...
	I0319 20:39:57.290594   59415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:39:57.296691   59415 system_pods.go:86] 8 kube-system pods found
	I0319 20:39:57.296710   59415 system_pods.go:89] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.296718   59415 system_pods.go:89] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.296722   59415 system_pods.go:89] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.296726   59415 system_pods.go:89] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.296730   59415 system_pods.go:89] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.296734   59415 system_pods.go:89] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.296741   59415 system_pods.go:89] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.296747   59415 system_pods.go:89] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.296753   59415 system_pods.go:126] duration metric: took 6.154905ms to wait for k8s-apps to be running ...
	I0319 20:39:57.296762   59415 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:39:57.296803   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:57.313729   59415 system_svc.go:56] duration metric: took 16.960151ms WaitForService to wait for kubelet
	I0319 20:39:57.313753   59415 kubeadm.go:576] duration metric: took 4m27.777105553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:39:57.313777   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:39:57.316765   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:39:57.316789   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:39:57.316803   59415 node_conditions.go:105] duration metric: took 3.021397ms to run NodePressure ...
	I0319 20:39:57.316813   59415 start.go:240] waiting for startup goroutines ...
	I0319 20:39:57.316820   59415 start.go:245] waiting for cluster config update ...
	I0319 20:39:57.316830   59415 start.go:254] writing updated cluster config ...
	I0319 20:39:57.317087   59415 ssh_runner.go:195] Run: rm -f paused
	I0319 20:39:57.365814   59415 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:39:57.368111   59415 out.go:177] * Done! kubectl is now configured to use "embed-certs-421660" cluster and "default" namespace by default
	I0319 20:39:56.088199   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.088480   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.091027   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.971917   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.972329   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:02.589430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.088313   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:03.474330   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.972928   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:07.587315   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:09.588829   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:08.471254   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:10.472963   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.087905   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:14.589786   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.973661   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:15.471559   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.087489   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.087559   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.473159   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.975538   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:21.090446   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:23.588215   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.581466   60008 pod_ready.go:81] duration metric: took 4m0.000988658s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:24.581495   60008 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:40:24.581512   60008 pod_ready.go:38] duration metric: took 4m13.547382951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:24.581535   60008 kubeadm.go:591] duration metric: took 4m20.894503953s to restartPrimaryControlPlane
	W0319 20:40:24.581583   60008 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:24.581611   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:22.472853   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.972183   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:26.973460   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:28.974127   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:31.475479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:33.973020   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:36.471909   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:38.473008   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:40.975638   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:43.473149   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:45.474566   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:47.972615   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:50.472593   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:52.973302   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:55.472067   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:56.465422   59019 pod_ready.go:81] duration metric: took 4m0.000285496s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:56.465453   59019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0319 20:40:56.465495   59019 pod_ready.go:38] duration metric: took 4m7.567400515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:56.465521   59019 kubeadm.go:591] duration metric: took 4m16.916387223s to restartPrimaryControlPlane
	W0319 20:40:56.465574   59019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:56.465604   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:56.963018   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.381377433s)
	I0319 20:40:56.963106   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:40:56.982252   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:40:56.994310   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:40:57.004950   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:40:57.004974   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:40:57.005018   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:40:57.015009   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:40:57.015070   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:40:57.026153   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:40:57.036560   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:40:57.036611   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:40:57.047469   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.060137   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:40:57.060188   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.073305   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:40:57.083299   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:40:57.083372   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:40:57.093788   60008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:40:57.352358   60008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:05.910387   60008 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:41:05.910460   60008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:05.910542   60008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:05.910660   60008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:05.910798   60008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:05.910903   60008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:05.912366   60008 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:05.912439   60008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:05.912493   60008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:05.912563   60008 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:05.912614   60008 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:05.912673   60008 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:05.912726   60008 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:05.912809   60008 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:05.912874   60008 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:05.912975   60008 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:05.913082   60008 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:05.913142   60008 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:05.913197   60008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:05.913258   60008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:05.913363   60008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:05.913439   60008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:05.913536   60008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:05.913616   60008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:05.913738   60008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:05.913841   60008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:05.915394   60008 out.go:204]   - Booting up control plane ...
	I0319 20:41:05.915486   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:05.915589   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:05.915682   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:05.915832   60008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:05.915951   60008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:05.916010   60008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:05.916154   60008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:05.916255   60008 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505433 seconds
	I0319 20:41:05.916392   60008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:05.916545   60008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:05.916628   60008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:05.916839   60008 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-385240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:05.916908   60008 kubeadm.go:309] [bootstrap-token] Using token: y9pq78.ls188thm3dr5dool
	I0319 20:41:05.918444   60008 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:05.918567   60008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:05.918654   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:05.918821   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:05.918999   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:05.919147   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:05.919260   60008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:05.919429   60008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:05.919498   60008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:05.919572   60008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:05.919582   60008 kubeadm.go:309] 
	I0319 20:41:05.919665   60008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:05.919678   60008 kubeadm.go:309] 
	I0319 20:41:05.919787   60008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:05.919799   60008 kubeadm.go:309] 
	I0319 20:41:05.919834   60008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:05.919929   60008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:05.920007   60008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:05.920017   60008 kubeadm.go:309] 
	I0319 20:41:05.920102   60008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:05.920112   60008 kubeadm.go:309] 
	I0319 20:41:05.920182   60008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:05.920191   60008 kubeadm.go:309] 
	I0319 20:41:05.920284   60008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:05.920411   60008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:05.920506   60008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:05.920520   60008 kubeadm.go:309] 
	I0319 20:41:05.920648   60008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:05.920762   60008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:05.920771   60008 kubeadm.go:309] 
	I0319 20:41:05.920901   60008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921063   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:05.921099   60008 kubeadm.go:309] 	--control-plane 
	I0319 20:41:05.921105   60008 kubeadm.go:309] 
	I0319 20:41:05.921207   60008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:05.921216   60008 kubeadm.go:309] 
	I0319 20:41:05.921285   60008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921386   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:05.921397   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:41:05.921403   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:05.922921   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:05.924221   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:05.941888   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:06.040294   60008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:06.040378   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.040413   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-385240 minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=default-k8s-diff-port-385240 minikube.k8s.io/primary=true
	I0319 20:41:06.104038   60008 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:06.266168   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.766345   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.266622   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.766418   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.266864   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.766777   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.266420   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.766319   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:10.266990   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
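The two lines above (PID 59621) are interleaved from a second profile being brought up in parallel; its kubelet has not yet answered the local health probe. kubeadm's [kubelet-check] polls the kubelet healthz endpoint on 127.0.0.1:10248, which is the same check sketched below. This is only an illustrative shell sketch, assuming shell access to that node; the profile behind PID 59621 is not named in this excerpt.

    # Probe the kubelet's local healthz endpoint (the check kubeadm's kubelet-check performs);
    # a healthy kubelet answers "ok".
    curl -sSL http://localhost:10248/healthz

    # If the connection is refused, inspect the service state and its recent log output.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager | tail -n 50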
	I0319 20:41:10.766714   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.266839   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.767222   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.266933   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.766390   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.266562   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.766618   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.267159   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.767010   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.266307   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.767002   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.266488   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.766567   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.266789   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.766935   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.266312   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.767202   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.904766   60008 kubeadm.go:1107] duration metric: took 12.864451937s to wait for elevateKubeSystemPrivileges
	W0319 20:41:18.904802   60008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:18.904810   60008 kubeadm.go:393] duration metric: took 5m15.275720912s to StartCluster
	I0319 20:41:18.904826   60008 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.904910   60008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:18.906545   60008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.906817   60008 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:18.908538   60008 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:18.906944   60008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:18.907019   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:41:18.910084   60008 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:18.910100   60008 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910125   60008 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:18.910135   60008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385240"
	W0319 20:41:18.910141   60008 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:18.910255   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910127   60008 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.910313   60008 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:18.910334   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910603   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910635   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910647   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910667   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910692   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910671   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.927094   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0319 20:41:18.927240   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0319 20:41:18.927517   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.927620   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928036   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928059   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928074   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0319 20:41:18.928331   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928360   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928492   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928538   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928737   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928993   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.929009   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.929046   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.929066   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929108   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.929338   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.929862   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929893   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.932815   60008 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.932838   60008 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:18.932865   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.933211   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.933241   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.945888   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0319 20:41:18.946351   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.946842   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.946869   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.947426   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.947600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.947808   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0319 20:41:18.948220   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.948367   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0319 20:41:18.948739   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.948753   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.949222   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.949277   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.951252   60008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:18.949736   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.950173   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.951720   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.952838   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.952813   60008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:18.952917   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:18.952934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.952815   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.953264   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.953460   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.955228   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.957199   60008 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:18.958698   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:18.958715   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:18.958733   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.956502   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.957073   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.958806   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.958845   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.959306   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.959485   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.959783   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.961410   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961775   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.961802   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961893   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.962065   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.962213   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.962369   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.975560   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0319 20:41:18.976026   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.976503   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.976524   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.976893   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.977128   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.978582   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.978862   60008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:18.978881   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:18.978898   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.981356   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981730   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.981762   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.982056   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.982192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.982337   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:19.126985   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:19.188792   60008 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198961   60008 node_ready.go:49] node "default-k8s-diff-port-385240" has status "Ready":"True"
	I0319 20:41:19.198981   60008 node_ready.go:38] duration metric: took 10.160382ms for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198992   60008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:19.209346   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:19.335212   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:19.414291   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:19.506570   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:19.506590   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:19.651892   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:19.651916   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:19.808237   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:19.808282   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:19.924353   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:20.583635   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169310347s)
	I0319 20:41:20.583700   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.583717   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.583981   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.583991   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.584015   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.584027   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.584253   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.584282   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585518   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250274289s)
	I0319 20:41:20.585568   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585584   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.585855   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.585879   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.585888   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585902   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585916   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.586162   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.586168   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.586177   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.609166   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.609183   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.609453   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.609492   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.609502   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.750409   60008 pod_ready.go:92] pod "coredns-76f75df574-4rq6h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:20.750433   60008 pod_ready.go:81] duration metric: took 1.541065393s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.750442   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.869692   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.869719   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.869995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.870000   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870025   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870045   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.870057   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.870336   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870352   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870366   60008 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:20.872093   60008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:41:20.873465   60008 addons.go:505] duration metric: took 1.966520277s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:41:21.260509   60008 pod_ready.go:92] pod "coredns-76f75df574-swxdt" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.260533   60008 pod_ready.go:81] duration metric: took 510.083899ms for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.260543   60008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268298   60008 pod_ready.go:92] pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.268324   60008 pod_ready.go:81] duration metric: took 7.772878ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268335   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274436   60008 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.274461   60008 pod_ready.go:81] duration metric: took 6.117464ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274472   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281324   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.281347   60008 pod_ready.go:81] duration metric: took 6.866088ms for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281367   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.593980   60008 pod_ready.go:92] pod "kube-proxy-j7ghm" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.594001   60008 pod_ready.go:81] duration metric: took 312.62702ms for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.594009   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993321   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.993346   60008 pod_ready.go:81] duration metric: took 399.330556ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993362   60008 pod_ready.go:38] duration metric: took 2.794359581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:21.993375   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:21.993423   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:22.010583   60008 api_server.go:72] duration metric: took 3.10372573s to wait for apiserver process to appear ...
	I0319 20:41:22.010609   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:22.010629   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:41:22.015218   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:41:22.016276   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:41:22.016291   60008 api_server.go:131] duration metric: took 5.6763ms to wait for apiserver health ...
	I0319 20:41:22.016298   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:22.197418   60008 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:22.197454   60008 system_pods.go:61] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.197460   60008 system_pods.go:61] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.197465   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.197470   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.197476   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.197481   60008 system_pods.go:61] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.197485   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.197494   60008 system_pods.go:61] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.197500   60008 system_pods.go:61] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.197514   60008 system_pods.go:74] duration metric: took 181.210964ms to wait for pod list to return data ...
	I0319 20:41:22.197526   60008 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:22.392702   60008 default_sa.go:45] found service account: "default"
	I0319 20:41:22.392738   60008 default_sa.go:55] duration metric: took 195.195704ms for default service account to be created ...
	I0319 20:41:22.392751   60008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:22.595946   60008 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:22.595975   60008 system_pods.go:89] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.595980   60008 system_pods.go:89] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.595985   60008 system_pods.go:89] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.595991   60008 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.595996   60008 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.596006   60008 system_pods.go:89] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.596010   60008 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.596016   60008 system_pods.go:89] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.596022   60008 system_pods.go:89] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.596034   60008 system_pods.go:126] duration metric: took 203.277741ms to wait for k8s-apps to be running ...
	I0319 20:41:22.596043   60008 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:22.596087   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:22.615372   60008 system_svc.go:56] duration metric: took 19.319488ms WaitForService to wait for kubelet
	I0319 20:41:22.615396   60008 kubeadm.go:576] duration metric: took 3.708546167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:22.615413   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:22.793277   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:22.793303   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:22.793313   60008 node_conditions.go:105] duration metric: took 177.89499ms to run NodePressure ...
	I0319 20:41:22.793325   60008 start.go:240] waiting for startup goroutines ...
	I0319 20:41:22.793331   60008 start.go:245] waiting for cluster config update ...
	I0319 20:41:22.793342   60008 start.go:254] writing updated cluster config ...
	I0319 20:41:22.793598   60008 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:22.845339   60008 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:41:22.847429   60008 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-385240" cluster and "default" namespace by default
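The "Done!" line closes the start sequence for default-k8s-diff-port-385240: stale kubeconfig cleanup, kubeadm init on port 8444, bridge CNI, the minikube-rbac clusterrolebinding, the storage-provisioner/default-storageclass/metrics-server addons, and the readiness and healthz checks against https://192.168.39.77:8444. Below is a minimal shell sketch of how that final verification could be repeated by hand; it assumes the kubeconfig context carries the profile name (minikube's default) and mirrors the label selectors the log waits on.

    # Manual re-check of what the log above verifies, using the profile's kubeconfig context.
    kubectl --context default-k8s-diff-port-385240 get --raw /healthz          # expect "ok"
    kubectl --context default-k8s-diff-port-385240 get nodes                   # node should be Ready
    kubectl --context default-k8s-diff-port-385240 -n kube-system get pods     # control-plane pods Running
    kubectl --context default-k8s-diff-port-385240 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s

Note that metrics-server-57f55c9bc5-nv288 was still Pending at this point in the log, so a similar wait on the metrics-server pod would not necessarily return immediately.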
	I0319 20:41:29.064044   59019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.598411816s)
	I0319 20:41:29.064115   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:29.082924   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:41:29.095050   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:29.106905   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:29.106918   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:29.106962   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:29.118153   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:29.118209   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:29.128632   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:29.140341   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:29.140401   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:29.151723   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.162305   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:29.162365   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.173654   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:29.185155   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:29.185211   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:29.196015   59019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:29.260934   59019 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0319 20:41:29.261054   59019 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:29.412424   59019 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:29.412592   59019 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:29.412759   59019 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:29.636019   59019 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:29.638046   59019 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:29.638158   59019 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:29.638216   59019 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:29.638279   59019 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:29.638331   59019 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:29.645456   59019 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:29.645553   59019 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:29.645610   59019 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:29.645663   59019 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:29.645725   59019 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:29.645788   59019 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:29.645822   59019 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:29.645869   59019 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:29.895850   59019 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:30.248635   59019 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:30.380474   59019 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:30.457908   59019 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:30.585194   59019 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:30.585852   59019 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:30.588394   59019 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:30.590147   59019 out.go:204]   - Booting up control plane ...
	I0319 20:41:30.590241   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:30.590353   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:30.590606   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:30.611645   59019 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:30.614010   59019 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:30.614266   59019 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:30.757838   59019 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 20:41:30.757973   59019 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0319 20:41:31.758717   59019 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001332477s
	I0319 20:41:31.758819   59019 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 20:41:37.261282   59019 kubeadm.go:309] [api-check] The API server is healthy after 5.50238s
	I0319 20:41:37.275017   59019 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:37.299605   59019 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:37.335190   59019 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:37.335449   59019 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-414130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:37.350882   59019 kubeadm.go:309] [bootstrap-token] Using token: 0euy3c.pb7fih13u47u7k5a
	I0319 20:41:37.352692   59019 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:37.352796   59019 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:37.357551   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:37.365951   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:37.369544   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:37.376066   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:37.379284   59019 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:37.669667   59019 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:38.120423   59019 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:38.668937   59019 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:38.670130   59019 kubeadm.go:309] 
	I0319 20:41:38.670236   59019 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:38.670251   59019 kubeadm.go:309] 
	I0319 20:41:38.670339   59019 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:38.670348   59019 kubeadm.go:309] 
	I0319 20:41:38.670369   59019 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:38.670451   59019 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:38.670520   59019 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:38.670530   59019 kubeadm.go:309] 
	I0319 20:41:38.670641   59019 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:38.670653   59019 kubeadm.go:309] 
	I0319 20:41:38.670720   59019 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:38.670731   59019 kubeadm.go:309] 
	I0319 20:41:38.670802   59019 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:38.670916   59019 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:38.671036   59019 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:38.671053   59019 kubeadm.go:309] 
	I0319 20:41:38.671185   59019 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:38.671332   59019 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:38.671351   59019 kubeadm.go:309] 
	I0319 20:41:38.671438   59019 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671588   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:38.671609   59019 kubeadm.go:309] 	--control-plane 
	I0319 20:41:38.671613   59019 kubeadm.go:309] 
	I0319 20:41:38.671684   59019 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:38.671693   59019 kubeadm.go:309] 
	I0319 20:41:38.671758   59019 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671877   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:38.672172   59019 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:38.672197   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:41:38.672212   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:38.674158   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:38.675618   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:38.690458   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:38.712520   59019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:38.712597   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:38.712616   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-414130 minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=no-preload-414130 minikube.k8s.io/primary=true
	I0319 20:41:38.902263   59019 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:38.902364   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.403054   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.903127   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.402786   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.903358   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.403414   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.902829   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.402506   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.903338   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.402784   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.902477   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.403152   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.903190   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.402544   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.903397   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:46.402785   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:46.902656   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.402845   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.903436   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.402511   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.903073   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.402559   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.902914   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.402708   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.903441   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.403416   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.585670   59019 kubeadm.go:1107] duration metric: took 12.873132825s to wait for elevateKubeSystemPrivileges
	W0319 20:41:51.585714   59019 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:51.585724   59019 kubeadm.go:393] duration metric: took 5m12.093644869s to StartCluster
	I0319 20:41:51.585744   59019 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.585835   59019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:51.588306   59019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.588634   59019 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:51.590331   59019 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:51.588755   59019 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:51.588891   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:41:51.590430   59019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-414130"
	I0319 20:41:51.591988   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:51.592020   59019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-414130"
	W0319 20:41:51.592038   59019 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:51.592069   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.590437   59019 addons.go:69] Setting default-storageclass=true in profile "no-preload-414130"
	I0319 20:41:51.590441   59019 addons.go:69] Setting metrics-server=true in profile "no-preload-414130"
	I0319 20:41:51.592098   59019 addons.go:234] Setting addon metrics-server=true in "no-preload-414130"
	W0319 20:41:51.592114   59019 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:51.592129   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.592164   59019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-414130"
	I0319 20:41:51.592450   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592479   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592505   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592532   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.608909   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0319 20:41:51.609383   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.609942   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.609962   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.610565   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.610774   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.612725   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0319 20:41:51.612794   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0319 20:41:51.613141   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.613637   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.613660   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.614121   59019 addons.go:234] Setting addon default-storageclass=true in "no-preload-414130"
	W0319 20:41:51.614139   59019 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:51.614167   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.614214   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.614482   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614512   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614774   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614810   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614876   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.615336   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.615369   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.615703   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.616237   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.616281   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.630175   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0319 20:41:51.630802   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.631279   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.631296   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.631645   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.632322   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.632356   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.634429   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0319 20:41:51.634865   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.635311   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.635324   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.635922   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.636075   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.637997   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.640025   59019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:51.641428   59019 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:51.641445   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:51.641462   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.644316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644838   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.644853   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644875   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0319 20:41:51.645162   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.645300   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.645365   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.645499   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.645613   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.645964   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.645976   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.646447   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.646663   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.648174   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.649872   59019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:51.651152   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:51.651177   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:51.651197   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.654111   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654523   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.654545   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654792   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.654987   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.655156   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.655281   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.656648   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0319 20:41:51.656960   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.657457   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.657471   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.657751   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.657948   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.659265   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.659503   59019 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:51.659517   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:51.659533   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.662039   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662427   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.662447   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662583   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.662757   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.662879   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.662991   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.845584   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:51.876597   59019 node_ready.go:35] waiting up to 6m0s for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886290   59019 node_ready.go:49] node "no-preload-414130" has status "Ready":"True"
	I0319 20:41:51.886308   59019 node_ready.go:38] duration metric: took 9.684309ms for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886315   59019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:51.893456   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:51.976850   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:52.031123   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:52.031144   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:52.133184   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:52.195945   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:52.195968   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:52.270721   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.270745   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:52.407604   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.578113   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578140   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578511   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578524   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.578532   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.578557   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578566   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578809   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578828   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.610849   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.610873   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.611246   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.611251   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.611269   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.342742   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.209525982s)
	I0319 20:41:53.342797   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.342808   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343131   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343159   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343163   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.343174   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.343194   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343486   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343503   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343525   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.450430   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.450458   59019 pod_ready.go:81] duration metric: took 1.556981953s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.450478   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459425   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.459454   59019 pod_ready.go:81] duration metric: took 8.967211ms for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459467   59019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495144   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.495164   59019 pod_ready.go:81] duration metric: took 35.690498ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495173   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520382   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.520412   59019 pod_ready.go:81] duration metric: took 25.23062ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520426   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530859   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.530889   59019 pod_ready.go:81] duration metric: took 10.451233ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530903   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.545946   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.13830463s)
	I0319 20:41:53.545994   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546009   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546304   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546323   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546333   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546350   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546678   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546695   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546706   59019 addons.go:470] Verifying addon metrics-server=true in "no-preload-414130"
	I0319 20:41:53.546764   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.548523   59019 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0319 20:41:53.549990   59019 addons.go:505] duration metric: took 1.961237309s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0319 20:41:53.881082   59019 pod_ready.go:92] pod "kube-proxy-m7m4h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.881107   59019 pod_ready.go:81] duration metric: took 350.197776ms for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.881116   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283891   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:54.283924   59019 pod_ready.go:81] duration metric: took 402.800741ms for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283936   59019 pod_ready.go:38] duration metric: took 2.397611991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:54.283953   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:54.284016   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:54.304606   59019 api_server.go:72] duration metric: took 2.715931012s to wait for apiserver process to appear ...
	I0319 20:41:54.304629   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:54.304651   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:41:54.309292   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:41:54.310195   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:41:54.310215   59019 api_server.go:131] duration metric: took 5.579162ms to wait for apiserver health ...
	I0319 20:41:54.310225   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:54.488441   59019 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:54.488475   59019 system_pods.go:61] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.488482   59019 system_pods.go:61] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.488487   59019 system_pods.go:61] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.488491   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.488496   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.488500   59019 system_pods.go:61] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.488505   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.488514   59019 system_pods.go:61] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.488520   59019 system_pods.go:61] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.488530   59019 system_pods.go:74] duration metric: took 178.298577ms to wait for pod list to return data ...
	I0319 20:41:54.488543   59019 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:54.679537   59019 default_sa.go:45] found service account: "default"
	I0319 20:41:54.679560   59019 default_sa.go:55] duration metric: took 191.010696ms for default service account to be created ...
	I0319 20:41:54.679569   59019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:54.884163   59019 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:54.884197   59019 system_pods.go:89] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.884205   59019 system_pods.go:89] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.884211   59019 system_pods.go:89] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.884217   59019 system_pods.go:89] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.884223   59019 system_pods.go:89] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.884230   59019 system_pods.go:89] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.884236   59019 system_pods.go:89] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.884246   59019 system_pods.go:89] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.884268   59019 system_pods.go:89] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.884281   59019 system_pods.go:126] duration metric: took 204.70598ms to wait for k8s-apps to be running ...
	I0319 20:41:54.884294   59019 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:54.884348   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:54.901838   59019 system_svc.go:56] duration metric: took 17.536645ms WaitForService to wait for kubelet
	I0319 20:41:54.901869   59019 kubeadm.go:576] duration metric: took 3.313198534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:54.901887   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:55.080463   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:55.080485   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:55.080495   59019 node_conditions.go:105] duration metric: took 178.603035ms to run NodePressure ...
	I0319 20:41:55.080507   59019 start.go:240] waiting for startup goroutines ...
	I0319 20:41:55.080513   59019 start.go:245] waiting for cluster config update ...
	I0319 20:41:55.080523   59019 start.go:254] writing updated cluster config ...
	I0319 20:41:55.080753   59019 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:55.130477   59019 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0319 20:41:55.133906   59019 out.go:177] * Done! kubectl is now configured to use "no-preload-414130" cluster and "default" namespace by default
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
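	(The "Gathering logs" steps above shell out to journalctl, dmesg and crictl on the node, with docker as a fallback when crictl is absent. A rough manual equivalent of the same collection, run on the node over SSH and using only the commands already shown in this log, would be:

		# Container status, as gathered above (crictl preferred, docker as fallback)
		sudo crictl ps -a || sudo docker ps -a
		# Kubelet and CRI-O journals, matching the "Gathering logs for kubelet/CRI-O" steps
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		# Kernel warnings and errors, matching the dmesg step
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	None of these commands return control-plane containers here, which is why the describe-nodes step below fails against localhost:8443.)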
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 
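	(The failure above reduces to the kubelet never answering on its local healthz port, so kubeadm's wait-control-plane phase times out after 4m0s and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal follow-up on the node, using only the checks quoted in the kubeadm output and the minikube suggestion above, might look like the sketch below; the profile name is a placeholder, not taken from this run:

		# Is the kubelet up and serving its local health endpoint?
		systemctl status kubelet
		journalctl -xeu kubelet
		curl -sSL http://localhost:10248/healthz
		# Did any control-plane container start under CRI-O at all?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Retry with the cgroup-driver hint from the suggestion above (<profile> is illustrative)
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

	If the healthz check still refuses connections after the retry, the kubelet journal is the place to look for the cgroup-driver mismatch referenced in minikube issue #4172.)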
	
	
	==> CRI-O <==
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.435283832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881339435260083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b36b64a-a7f7-4a1c-a19f-d57180acf6bc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.435990440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3fa0d9c-5da4-408a-851d-db2332e419e8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.436041481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3fa0d9c-5da4-408a-851d-db2332e419e8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.436323132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3fa0d9c-5da4-408a-851d-db2332e419e8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.481137681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fe80ce9-bca5-4d0c-9c0d-4953cacc1e62 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.481328435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fe80ce9-bca5-4d0c-9c0d-4953cacc1e62 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.482130750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=057ede39-0420-4d51-9dcc-6f400befaf11 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.482741078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881339482713967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=057ede39-0420-4d51-9dcc-6f400befaf11 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.483639873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d185699b-e8db-40af-80cc-1b6a45f44044 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.483703236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d185699b-e8db-40af-80cc-1b6a45f44044 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.483891531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d185699b-e8db-40af-80cc-1b6a45f44044 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.526483208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a439ef54-ac6b-477f-ab50-5f204853837b name=/runtime.v1.RuntimeService/Version
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.526554236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a439ef54-ac6b-477f-ab50-5f204853837b name=/runtime.v1.RuntimeService/Version
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.528611640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0da8ef1c-89c1-4c27-b2a4-dc21aaaee4ca name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.528982641Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881339528961517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0da8ef1c-89c1-4c27-b2a4-dc21aaaee4ca name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.529655640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd72f4d0-1566-4d75-9ddf-bde7bd7be307 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.529708370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd72f4d0-1566-4d75-9ddf-bde7bd7be307 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.529916283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd72f4d0-1566-4d75-9ddf-bde7bd7be307 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.568010345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6737a614-46ea-49c2-80bf-f77941deee23 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.568081955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6737a614-46ea-49c2-80bf-f77941deee23 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.569416587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11898944-9468-41ce-95f4-a1aa28dc4625 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.570010916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881339569985339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11898944-9468-41ce-95f4-a1aa28dc4625 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.571046306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3532095-89f3-469d-8595-2d983bf80d7a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.571100543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3532095-89f3-469d-8595-2d983bf80d7a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:48:59 embed-certs-421660 crio[695]: time="2024-03-19 20:48:59.571387915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3532095-89f3-469d-8595-2d983bf80d7a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	54948b2ac3f01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   0fa6a0f32c877       storage-provisioner
	62044eee90d6f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d2d82268fa0f0       busybox
	2b137c65a3111       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   27180cf91e8e1       coredns-76f75df574-9tdfg
	b8bd4bb1ef229       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago      Running             kube-proxy                1                   25d2326204517       kube-proxy-qvn26
	7cf3f6946847f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   0fa6a0f32c877       storage-provisioner
	33f6eb05f3ff8       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      13 minutes ago      Running             kube-controller-manager   1                   5087175ca6e54       kube-controller-manager-embed-certs-421660
	f6f6bbd4f740d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago      Running             kube-scheduler            1                   7a1a317e12f3c       kube-scheduler-embed-certs-421660
	e2f9da9940d12       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      13 minutes ago      Running             kube-apiserver            1                   854ff60b1dcd9       kube-apiserver-embed-certs-421660
	c2391bc9672e3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   bb1c287d8f386       etcd-embed-certs-421660
	
	
	==> coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39975 - 28664 "HINFO IN 9037374147638026213.6766847950462541327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019097871s
	
	
	==> describe nodes <==
	Name:               embed-certs-421660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-421660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=embed-certs-421660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_27_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:27:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-421660
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:48:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:46:07 +0000   Tue, 19 Mar 2024 20:27:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:46:07 +0000   Tue, 19 Mar 2024 20:27:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:46:07 +0000   Tue, 19 Mar 2024 20:27:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:46:07 +0000   Tue, 19 Mar 2024 20:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.108
	  Hostname:    embed-certs-421660
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c70488eeac7540f4ac95b35d7265089b
	  System UUID:                c70488ee-ac75-40f4-ac95-b35d7265089b
	  Boot ID:                    05cd69a6-52f3-4411-97cf-09c07d0b0ca4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-76f75df574-9tdfg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-421660                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-421660             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-421660    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-qvn26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-421660             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-xbh7v               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-421660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-421660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-421660 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-421660 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-421660 event: Registered Node embed-certs-421660 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-421660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-421660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-421660 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-421660 event: Registered Node embed-certs-421660 in Controller
	
	
	==> dmesg <==
	[Mar19 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052182] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042648] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar19 20:35] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.397824] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.650836] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.245942] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.057412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068419] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.184218] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.157233] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.349010] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +4.967584] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.060068] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.983193] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +4.602032] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.507758] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +3.235567] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.107711] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] <==
	{"level":"info","ts":"2024-03-19T20:35:24.975597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became candidate at term 3"}
	{"level":"info","ts":"2024-03-19T20:35:24.975622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgVoteResp from d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-03-19T20:35:24.975677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became leader at term 3"}
	{"level":"info","ts":"2024-03-19T20:35:24.975703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d5ce96a8bfe0f5c1 elected leader d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-03-19T20:35:24.980092Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d5ce96a8bfe0f5c1","local-member-attributes":"{Name:embed-certs-421660 ClientURLs:[https://192.168.50.108:2379]}","request-path":"/0/members/d5ce96a8bfe0f5c1/attributes","cluster-id":"38e677d7bff02ecf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:35:24.980349Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:35:24.982236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.108:2379"}
	{"level":"info","ts":"2024-03-19T20:35:24.982747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:35:24.984309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-19T20:35:24.984391Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:35:24.991231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:35:44.866246Z","caller":"traceutil/trace.go:171","msg":"trace[207752971] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"357.686113ms","start":"2024-03-19T20:35:44.508453Z","end":"2024-03-19T20:35:44.866139Z","steps":["trace[207752971] 'process raft request'  (duration: 354.454635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:35:44.86689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:35:44.508438Z","time spent":"358.018244ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":791,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:590 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:734 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-03-19T20:35:44.870299Z","caller":"traceutil/trace.go:171","msg":"trace[1882221399] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:647; }","duration":"128.229867ms","start":"2024-03-19T20:35:44.741841Z","end":"2024-03-19T20:35:44.870071Z","steps":["trace[1882221399] 'read index received'  (duration: 121.077587ms)","trace[1882221399] 'applied index is now lower than readState.Index'  (duration: 7.150844ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:35:44.871073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.221606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-76f75df574-9tdfg\" ","response":"range_response_count:1 size:4737"}
	{"level":"info","ts":"2024-03-19T20:35:44.871269Z","caller":"traceutil/trace.go:171","msg":"trace[1645590547] range","detail":"{range_begin:/registry/pods/kube-system/coredns-76f75df574-9tdfg; range_end:; response_count:1; response_revision:605; }","duration":"129.44581ms","start":"2024-03-19T20:35:44.741809Z","end":"2024-03-19T20:35:44.871255Z","steps":["trace[1645590547] 'agreement among raft nodes before linearized reading'  (duration: 128.782028ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:35:44.871607Z","caller":"traceutil/trace.go:171","msg":"trace[1805808659] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"361.672179ms","start":"2024-03-19T20:35:44.509921Z","end":"2024-03-19T20:35:44.871593Z","steps":["trace[1805808659] 'process raft request'  (duration: 359.4794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:35:44.871722Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:35:44.509907Z","time spent":"361.765396ms","remote":"127.0.0.1:55566","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1310,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-8q6xv\" mod_revision:595 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-8q6xv\" value_size:1251 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-8q6xv\" > >"}
	{"level":"info","ts":"2024-03-19T20:35:44.87202Z","caller":"traceutil/trace.go:171","msg":"trace[1481147434] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"360.1033ms","start":"2024-03-19T20:35:44.511906Z","end":"2024-03-19T20:35:44.872009Z","steps":["trace[1481147434] 'process raft request'  (duration: 357.894982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:35:44.872762Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:35:44.511897Z","time spent":"360.758455ms","remote":"127.0.0.1:55782","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-76f75df574\" mod_revision:596 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" > >"}
	{"level":"info","ts":"2024-03-19T20:36:28.020629Z","caller":"traceutil/trace.go:171","msg":"trace[1562562091] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"156.033295ms","start":"2024-03-19T20:36:27.864564Z","end":"2024-03-19T20:36:28.020598Z","steps":["trace[1562562091] 'process raft request'  (duration: 121.280066ms)","trace[1562562091] 'compare'  (duration: 34.543468ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T20:36:29.872293Z","caller":"traceutil/trace.go:171","msg":"trace[1428623987] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"190.34256ms","start":"2024-03-19T20:36:29.68193Z","end":"2024-03-19T20:36:29.872273Z","steps":["trace[1428623987] 'process raft request'  (duration: 190.142198ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:45:25.028517Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-03-19T20:45:25.039708Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":849,"took":"10.549319ms","hash":4051401027,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2539520,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-03-19T20:45:25.0398Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4051401027,"revision":849,"compact-revision":-1}
	
	
	==> kernel <==
	 20:48:59 up 14 min,  0 users,  load average: 0.31, 0.35, 0.25
	Linux embed-certs-421660 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] <==
	I0319 20:43:27.501984       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:45:26.500431       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:45:26.500832       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0319 20:45:27.501376       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:45:27.501438       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:45:27.501447       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:45:27.501572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:45:27.501684       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:45:27.502952       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:46:27.501588       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:27.501838       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:46:27.501866       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:46:27.503091       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:27.503247       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:46:27.503280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:48:27.501971       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:48:27.502377       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:48:27.502463       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:48:27.503630       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:48:27.503758       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:48:27.503834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] <==
	I0319 20:43:09.421281       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:43:38.959096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:43:39.429428       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:44:08.965025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:44:09.439101       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:44:38.975597       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:44:39.446618       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:45:08.980243       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:45:09.454390       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:45:38.986094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:45:39.462519       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:46:08.991674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:46:09.472138       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:46:38.997881       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:46:39.480863       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:46:45.820047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="307.592µs"
	I0319 20:46:57.825382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="154.444µs"
	E0319 20:47:09.002413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:47:09.488788       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:47:39.009102       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:47:39.497749       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:48:09.015284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:48:09.506301       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:48:39.021474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:48:39.515322       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] <==
	I0319 20:35:27.364663       1 server_others.go:72] "Using iptables proxy"
	I0319 20:35:27.374604       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.108"]
	I0319 20:35:27.445322       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:35:27.445369       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:35:27.445393       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:35:27.453461       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:35:27.453700       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:35:27.453737       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:35:27.458273       1 config.go:188] "Starting service config controller"
	I0319 20:35:27.459251       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:35:27.458479       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:35:27.459313       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:35:27.458872       1 config.go:315] "Starting node config controller"
	I0319 20:35:27.459322       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:35:27.560053       1 shared_informer.go:318] Caches are synced for node config
	I0319 20:35:27.560104       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:35:27.560126       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] <==
	I0319 20:35:24.781422       1 serving.go:380] Generated self-signed cert in-memory
	W0319 20:35:26.454605       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0319 20:35:26.454811       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:35:26.454848       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0319 20:35:26.454929       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0319 20:35:26.517400       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 20:35:26.517448       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:35:26.519757       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 20:35:26.519912       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 20:35:26.519926       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:35:26.519965       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 20:35:26.621136       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:46:33 embed-certs-421660 kubelet[907]: E0319 20:46:33.821843     907 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 19 20:46:33 embed-certs-421660 kubelet[907]: E0319 20:46:33.822492     907 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 19 20:46:33 embed-certs-421660 kubelet[907]: E0319 20:46:33.823247     907 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xhdp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-xbh7v_kube-system(7cb1baf4-fcb9-4126-9437-45fc6228821f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 19 20:46:33 embed-certs-421660 kubelet[907]: E0319 20:46:33.823517     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:46:45 embed-certs-421660 kubelet[907]: E0319 20:46:45.803368     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:46:57 embed-certs-421660 kubelet[907]: E0319 20:46:57.803358     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:47:09 embed-certs-421660 kubelet[907]: E0319 20:47:09.802272     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:47:22 embed-certs-421660 kubelet[907]: E0319 20:47:22.827493     907 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:47:22 embed-certs-421660 kubelet[907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:47:22 embed-certs-421660 kubelet[907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:47:22 embed-certs-421660 kubelet[907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:47:22 embed-certs-421660 kubelet[907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:47:23 embed-certs-421660 kubelet[907]: E0319 20:47:23.803798     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:47:36 embed-certs-421660 kubelet[907]: E0319 20:47:36.802712     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:47:50 embed-certs-421660 kubelet[907]: E0319 20:47:50.802791     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:48:04 embed-certs-421660 kubelet[907]: E0319 20:48:04.802936     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:48:19 embed-certs-421660 kubelet[907]: E0319 20:48:19.802950     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:48:22 embed-certs-421660 kubelet[907]: E0319 20:48:22.826267     907 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:48:22 embed-certs-421660 kubelet[907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:48:22 embed-certs-421660 kubelet[907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:48:22 embed-certs-421660 kubelet[907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:48:22 embed-certs-421660 kubelet[907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:48:34 embed-certs-421660 kubelet[907]: E0319 20:48:34.809661     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:48:45 embed-certs-421660 kubelet[907]: E0319 20:48:45.801976     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:48:57 embed-certs-421660 kubelet[907]: E0319 20:48:57.802516     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	
	
	==> storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] <==
	I0319 20:35:58.170082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:35:58.184685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:35:58.184879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:36:15.592837       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:36:15.593832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-421660_be25b29b-6025-419b-80ed-f7d6f26cfd68!
	I0319 20:36:15.593543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad8d11a3-ac43-4481-ba89-bd8da41d2da8", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-421660_be25b29b-6025-419b-80ed-f7d6f26cfd68 became leader
	I0319 20:36:15.695128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-421660_be25b29b-6025-419b-80ed-f7d6f26cfd68!
	
	
	==> storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] <==
	I0319 20:35:27.285310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0319 20:35:57.288947       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-421660 -n embed-certs-421660
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-421660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xbh7v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-421660 describe pod metrics-server-57f55c9bc5-xbh7v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-421660 describe pod metrics-server-57f55c9bc5-xbh7v: exit status 1 (63.815139ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xbh7v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-421660 describe pod metrics-server-57f55c9bc5-xbh7v: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.29s)
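Note: to repeat the post-mortem queries above by hand, the following is a minimal, hypothetical Go sketch (not part of the test harness) that shells out to kubectl with the same context, field selector, and pod name shown in the helpers_test.go output; the context and pod names are taken from this report and will differ on other runs.

	// postmortem.go: rerun the non-running-pod queries from the post-mortem above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes kubectl with the given arguments and prints its combined output.
	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s(err: %v)\n", args, out, err)
	}

	func main() {
		ctx := "embed-certs-421660" // profile/context name taken from the logs above
		run("--context", ctx, "get", "po", "-A", "--field-selector=status.phase!=Running")
		run("--context", ctx, "describe", "pod", "-n", "kube-system", "metrics-server-57f55c9bc5-xbh7v")
	}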

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-19 20:50:23.433048344 +0000 UTC m=+6347.553668117
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-385240 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-385240 logs -n 25: (2.288878482s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:33:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:33:00.489344   60008 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:33:00.489594   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489603   60008 out.go:304] Setting ErrFile to fd 2...
	I0319 20:33:00.489607   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489787   60008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:33:00.490297   60008 out.go:298] Setting JSON to false
	I0319 20:33:00.491188   60008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8078,"bootTime":1710872302,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:33:00.491245   60008 start.go:139] virtualization: kvm guest
	I0319 20:33:00.493588   60008 out.go:177] * [default-k8s-diff-port-385240] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:33:00.495329   60008 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:33:00.496506   60008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:33:00.495369   60008 notify.go:220] Checking for updates...
	I0319 20:33:00.499210   60008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:33:00.500494   60008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:33:00.501820   60008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:33:00.503200   60008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:33:00.504837   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:33:00.505191   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.505266   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.519674   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0319 20:33:00.520123   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.520634   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.520656   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.520945   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.521132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.521364   60008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:33:00.521629   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.521660   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.535764   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0319 20:33:00.536105   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.536564   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.536583   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.536890   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.537079   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.572160   60008 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:33:00.573517   60008 start.go:297] selected driver: kvm2
	I0319 20:33:00.573530   60008 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.573663   60008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:33:00.574335   60008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.574423   60008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:33:00.588908   60008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:33:00.589283   60008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:33:00.589354   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:33:00.589375   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:33:00.589419   60008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.589532   60008 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.591715   60008 out.go:177] * Starting "default-k8s-diff-port-385240" primary control-plane node in "default-k8s-diff-port-385240" cluster
	I0319 20:32:58.292485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:01.364553   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:00.593043   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:33:00.593084   60008 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:33:00.593094   60008 cache.go:56] Caching tarball of preloaded images
	I0319 20:33:00.593156   60008 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:33:00.593166   60008 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:33:00.593281   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:33:00.593454   60008 start.go:360] acquireMachinesLock for default-k8s-diff-port-385240: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:33:07.444550   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:10.516480   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:16.596485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:19.668501   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:25.748504   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:28.820525   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:34.900508   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:37.972545   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:44.052478   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:47.124492   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:53.204484   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:56.276536   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:02.356552   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:05.428529   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:11.508540   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:14.580485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:20.660521   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:23.732555   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:29.812516   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:32.884574   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:38.964472   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:42.036583   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:48.116547   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:51.188507   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:54.193037   59415 start.go:364] duration metric: took 3m51.108134555s to acquireMachinesLock for "embed-certs-421660"
	I0319 20:34:54.193108   59415 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:34:54.193120   59415 fix.go:54] fixHost starting: 
	I0319 20:34:54.193458   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:34:54.193487   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:34:54.208614   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0319 20:34:54.209078   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:34:54.209506   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:34:54.209527   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:34:54.209828   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:34:54.209992   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:34:54.210117   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:34:54.211626   59415 fix.go:112] recreateIfNeeded on embed-certs-421660: state=Stopped err=<nil>
	I0319 20:34:54.211661   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	W0319 20:34:54.211820   59415 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:34:54.213989   59415 out.go:177] * Restarting existing kvm2 VM for "embed-certs-421660" ...
	I0319 20:34:54.190431   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:34:54.190483   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.190783   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:34:54.190809   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.191021   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:34:54.192901   59019 machine.go:97] duration metric: took 4m37.398288189s to provisionDockerMachine
	I0319 20:34:54.192939   59019 fix.go:56] duration metric: took 4m37.41948201s for fixHost
	I0319 20:34:54.192947   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 4m37.419503815s
	W0319 20:34:54.192970   59019 start.go:713] error starting host: provision: host is not running
	W0319 20:34:54.193060   59019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0319 20:34:54.193071   59019 start.go:728] Will try again in 5 seconds ...
	I0319 20:34:54.215391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Start
	I0319 20:34:54.215559   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring networks are active...
	I0319 20:34:54.216249   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network default is active
	I0319 20:34:54.216543   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network mk-embed-certs-421660 is active
	I0319 20:34:54.216902   59415 main.go:141] libmachine: (embed-certs-421660) Getting domain xml...
	I0319 20:34:54.217595   59415 main.go:141] libmachine: (embed-certs-421660) Creating domain...
	I0319 20:34:55.407058   59415 main.go:141] libmachine: (embed-certs-421660) Waiting to get IP...
	I0319 20:34:55.407855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.408280   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.408343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.408247   60323 retry.go:31] will retry after 202.616598ms: waiting for machine to come up
	I0319 20:34:55.612753   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.613313   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.613341   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.613247   60323 retry.go:31] will retry after 338.618778ms: waiting for machine to come up
	I0319 20:34:55.953776   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.954230   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.954259   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.954164   60323 retry.go:31] will retry after 389.19534ms: waiting for machine to come up
	I0319 20:34:56.344417   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.344855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.344886   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.344822   60323 retry.go:31] will retry after 555.697854ms: waiting for machine to come up
	I0319 20:34:56.902547   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.902990   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.903017   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.902955   60323 retry.go:31] will retry after 702.649265ms: waiting for machine to come up
	I0319 20:34:57.606823   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:57.607444   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:57.607484   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:57.607388   60323 retry.go:31] will retry after 814.886313ms: waiting for machine to come up
	I0319 20:34:59.194634   59019 start.go:360] acquireMachinesLock for no-preload-414130: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:34:58.424559   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:58.425066   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:58.425088   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:58.425011   60323 retry.go:31] will retry after 948.372294ms: waiting for machine to come up
	I0319 20:34:59.375490   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:59.375857   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:59.375884   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:59.375809   60323 retry.go:31] will retry after 1.206453994s: waiting for machine to come up
	I0319 20:35:00.584114   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:00.584548   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:00.584572   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:00.584496   60323 retry.go:31] will retry after 1.200177378s: waiting for machine to come up
	I0319 20:35:01.786803   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:01.787139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:01.787167   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:01.787085   60323 retry.go:31] will retry after 1.440671488s: waiting for machine to come up
	I0319 20:35:03.229775   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:03.230179   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:03.230216   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:03.230146   60323 retry.go:31] will retry after 2.073090528s: waiting for machine to come up
	I0319 20:35:05.305427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:05.305904   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:05.305930   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:05.305859   60323 retry.go:31] will retry after 3.463824423s: waiting for machine to come up
	I0319 20:35:08.773517   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:08.773911   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:08.773938   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:08.773873   60323 retry.go:31] will retry after 4.159170265s: waiting for machine to come up
	I0319 20:35:12.937475   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937965   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has current primary IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937979   59415 main.go:141] libmachine: (embed-certs-421660) Found IP for machine: 192.168.50.108
	I0319 20:35:12.937987   59415 main.go:141] libmachine: (embed-certs-421660) Reserving static IP address...
	I0319 20:35:12.938372   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.938400   59415 main.go:141] libmachine: (embed-certs-421660) DBG | skip adding static IP to network mk-embed-certs-421660 - found existing host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"}
	I0319 20:35:12.938412   59415 main.go:141] libmachine: (embed-certs-421660) Reserved static IP address: 192.168.50.108
	I0319 20:35:12.938435   59415 main.go:141] libmachine: (embed-certs-421660) Waiting for SSH to be available...
	I0319 20:35:12.938448   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Getting to WaitForSSH function...
	I0319 20:35:12.940523   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.940897   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.940932   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.941037   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH client type: external
	I0319 20:35:12.941069   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa (-rw-------)
	I0319 20:35:12.941102   59415 main.go:141] libmachine: (embed-certs-421660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:12.941116   59415 main.go:141] libmachine: (embed-certs-421660) DBG | About to run SSH command:
	I0319 20:35:12.941128   59415 main.go:141] libmachine: (embed-certs-421660) DBG | exit 0
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:13.068386   59415 main.go:141] libmachine: (embed-certs-421660) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:13.068756   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetConfigRaw
	I0319 20:35:13.069421   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.071751   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072101   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.072133   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072393   59415 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:35:13.072557   59415 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:13.072574   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:13.072781   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.075005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.075369   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.075678   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075816   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075973   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.076134   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.076364   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.076382   59415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:13.188983   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:13.189017   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189291   59415 buildroot.go:166] provisioning hostname "embed-certs-421660"
	I0319 20:35:13.189319   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189503   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.191881   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192190   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.192210   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192389   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.192550   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192696   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192818   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.192989   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.193145   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.193159   59415 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-421660 && echo "embed-certs-421660" | sudo tee /etc/hostname
	I0319 20:35:13.326497   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-421660
	
	I0319 20:35:13.326524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.329344   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.329765   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.330179   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330372   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330547   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.330753   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.330928   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.330943   59415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-421660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-421660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-421660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:13.454265   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:13.454297   59415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:13.454320   59415 buildroot.go:174] setting up certificates
	I0319 20:35:13.454334   59415 provision.go:84] configureAuth start
	I0319 20:35:13.454348   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.454634   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.457258   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457692   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.457723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457834   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.460123   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460436   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.460463   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460587   59415 provision.go:143] copyHostCerts
	I0319 20:35:13.460643   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:13.460652   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:13.460719   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:13.460815   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:13.460822   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:13.460846   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:13.460917   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:13.460924   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:13.460945   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:13.461004   59415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.embed-certs-421660 san=[127.0.0.1 192.168.50.108 embed-certs-421660 localhost minikube]
	I0319 20:35:13.553348   59415 provision.go:177] copyRemoteCerts
	I0319 20:35:13.553399   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:13.553424   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.555729   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556036   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.556071   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556199   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.556406   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.556579   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.556725   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:13.642780   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:35:13.670965   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:13.698335   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:13.724999   59415 provision.go:87] duration metric: took 270.652965ms to configureAuth
	I0319 20:35:13.725022   59415 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:13.725174   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:13.725235   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.727653   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.727969   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.727988   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.728186   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.728410   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728581   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728783   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.728960   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.729113   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.729130   59415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:14.012527   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:14.012554   59415 machine.go:97] duration metric: took 939.982813ms to provisionDockerMachine
	I0319 20:35:14.012568   59415 start.go:293] postStartSetup for "embed-certs-421660" (driver="kvm2")
	I0319 20:35:14.012582   59415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:14.012616   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.012969   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:14.012996   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.015345   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015706   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.015759   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015864   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.016069   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.016269   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.016409   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.105236   59415 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:14.110334   59415 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:14.110363   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:14.110435   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:14.110534   59415 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:14.110623   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:14.120911   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:14.148171   59415 start.go:296] duration metric: took 135.590484ms for postStartSetup
	I0319 20:35:14.148209   59415 fix.go:56] duration metric: took 19.955089617s for fixHost
	I0319 20:35:14.148234   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.150788   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.151165   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151331   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.151514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151667   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151784   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.151953   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:14.152125   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:14.152138   59415 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:14.265435   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880514.234420354
	
	I0319 20:35:14.265467   59415 fix.go:216] guest clock: 1710880514.234420354
	I0319 20:35:14.265478   59415 fix.go:229] Guest: 2024-03-19 20:35:14.234420354 +0000 UTC Remote: 2024-03-19 20:35:14.148214105 +0000 UTC m=+251.208119911 (delta=86.206249ms)
	I0319 20:35:14.265507   59415 fix.go:200] guest clock delta is within tolerance: 86.206249ms
	I0319 20:35:14.265516   59415 start.go:83] releasing machines lock for "embed-certs-421660", held for 20.072435424s
	I0319 20:35:14.265554   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.265868   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:14.268494   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268846   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.268874   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269589   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269751   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269833   59415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:14.269884   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.269956   59415 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:14.269972   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.272604   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272771   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272978   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273137   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273140   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273160   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273316   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273337   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273473   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273685   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.273738   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.358033   59415 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:14.385511   59415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:14.542052   59415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:14.549672   59415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:14.549747   59415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:14.569110   59415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:14.569137   59415 start.go:494] detecting cgroup driver to use...
	I0319 20:35:14.569193   59415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:14.586644   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:14.601337   59415 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:14.601407   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:14.616158   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:14.631754   59415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:14.746576   59415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:14.902292   59415 docker.go:233] disabling docker service ...
	I0319 20:35:14.902353   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:14.920787   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:14.938865   59415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:15.078791   59415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:15.214640   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:15.242992   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:15.264698   59415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:15.264755   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.276750   59415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:15.276817   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.288643   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.300368   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.318906   59415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:15.338660   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.351908   59415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.372022   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.384124   59415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:15.395206   59415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:15.395268   59415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:15.411193   59415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:15.422031   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:15.572313   59415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:15.730316   59415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:15.730389   59415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:15.738539   59415 start.go:562] Will wait 60s for crictl version
	I0319 20:35:15.738600   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:35:15.743107   59415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:15.788582   59415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:15.788666   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.819444   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.859201   59415 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:15.860492   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:15.863655   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864032   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:15.864063   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864303   59415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:15.870600   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:15.885694   59415 kubeadm.go:877] updating cluster {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:15.885833   59415 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:15.885890   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:15.924661   59415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:15.924736   59415 ssh_runner.go:195] Run: which lz4
	I0319 20:35:15.929595   59415 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:15.934980   59415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:15.935014   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:17.673355   59415 crio.go:462] duration metric: took 1.743798593s to copy over tarball
	I0319 20:35:17.673428   59415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:20.169596   59415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49612841s)
	I0319 20:35:20.169629   59415 crio.go:469] duration metric: took 2.496240167s to extract the tarball
	I0319 20:35:20.169639   59415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:20.208860   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:20.261040   59415 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:35:20.261063   59415 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:35:20.261071   59415 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.29.3 crio true true} ...
	I0319 20:35:20.261162   59415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-421660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:20.261227   59415 ssh_runner.go:195] Run: crio config
	I0319 20:35:20.311322   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:20.311346   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:20.311359   59415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:20.311377   59415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-421660 NodeName:embed-certs-421660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:35:20.311501   59415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-421660"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:20.311560   59415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:35:20.323700   59415 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:20.323776   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:20.334311   59415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0319 20:35:20.352833   59415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:20.372914   59415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0319 20:35:20.391467   59415 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:20.395758   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:20.408698   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:20.532169   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:20.550297   59415 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660 for IP: 192.168.50.108
	I0319 20:35:20.550320   59415 certs.go:194] generating shared ca certs ...
	I0319 20:35:20.550339   59415 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:20.550507   59415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:20.550574   59415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:20.550586   59415 certs.go:256] generating profile certs ...
	I0319 20:35:20.550700   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/client.key
	I0319 20:35:20.550774   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key.e5ca10b2
	I0319 20:35:20.550824   59415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key
	I0319 20:35:20.550954   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:20.550988   59415 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:20.551001   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:20.551037   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:20.551070   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:20.551101   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:20.551155   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:20.552017   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:20.583444   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:20.616935   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:20.673499   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:20.707988   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0319 20:35:20.734672   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:35:20.761302   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:20.792511   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:20.819903   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:20.848361   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:20.878230   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:20.908691   59415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:20.930507   59415 ssh_runner.go:195] Run: openssl version
	I0319 20:35:20.937088   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:20.949229   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954299   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954343   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.960610   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:20.972162   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:20.984137   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989211   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989273   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.995436   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:21.007076   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:21.018552   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024109   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024146   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.030344   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:21.041615   59415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:21.046986   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:21.053533   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:21.060347   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:21.067155   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:21.074006   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:21.080978   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:35:21.087615   59415 kubeadm.go:391] StartCluster: {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:21.087695   59415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:21.087745   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.131217   59415 cri.go:89] found id: ""
	I0319 20:35:21.131294   59415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:21.143460   59415 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:21.143487   59415 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:21.143493   59415 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:21.143545   59415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:21.156145   59415 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:21.157080   59415 kubeconfig.go:125] found "embed-certs-421660" server: "https://192.168.50.108:8443"
	I0319 20:35:21.158865   59415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:21.171515   59415 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.108
	I0319 20:35:21.171551   59415 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:21.171561   59415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:21.171607   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.221962   59415 cri.go:89] found id: ""
	I0319 20:35:21.222028   59415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:21.239149   59415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:21.250159   59415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:21.250185   59415 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:21.250242   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:21.260035   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:21.260107   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:21.270804   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:21.281041   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:21.281106   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:21.291796   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.301883   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:21.301943   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.313038   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:21.323390   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:21.323462   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:21.333893   59415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:21.344645   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:21.491596   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.349871   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.592803   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.670220   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.802978   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:22.803071   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
	I0319 20:35:23.303983   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.803254   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.846475   59415 api_server.go:72] duration metric: took 1.043496842s to wait for apiserver process to appear ...
	I0319 20:35:23.846509   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:35:23.846532   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:23.847060   59415 api_server.go:269] stopped: https://192.168.50.108:8443/healthz: Get "https://192.168.50.108:8443/healthz": dial tcp 192.168.50.108:8443: connect: connection refused
	I0319 20:35:24.347376   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.456794   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.456826   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.456841   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.492793   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.492827   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.847365   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.857297   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:26.857327   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.346936   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.351748   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:27.351775   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.847430   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.852157   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:35:27.868953   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:35:27.869006   59415 api_server.go:131] duration metric: took 4.022477349s to wait for apiserver health ...
	I0319 20:35:27.869019   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:27.869029   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:27.871083   59415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:35:27.872669   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:35:27.886256   59415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:35:27.912891   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:35:27.928055   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:35:27.928088   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:35:27.928095   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:35:27.928102   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:35:27.928108   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:35:27.928113   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:35:27.928118   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:35:27.928126   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:35:27.928130   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:35:27.928136   59415 system_pods.go:74] duration metric: took 15.221738ms to wait for pod list to return data ...
	I0319 20:35:27.928142   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:35:27.931854   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:35:27.931876   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:35:27.931888   59415 node_conditions.go:105] duration metric: took 3.74189ms to run NodePressure ...
	I0319 20:35:27.931903   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:28.209912   59415 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215315   59415 kubeadm.go:733] kubelet initialised
	I0319 20:35:28.215343   59415 kubeadm.go:734] duration metric: took 5.403708ms waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215353   59415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:28.221636   59415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.230837   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230868   59415 pod_ready.go:81] duration metric: took 9.198177ms for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.230878   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230887   59415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.237452   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237472   59415 pod_ready.go:81] duration metric: took 6.569363ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.237479   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237485   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.242902   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242919   59415 pod_ready.go:81] duration metric: took 5.427924ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.242926   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242931   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.316859   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316889   59415 pod_ready.go:81] duration metric: took 73.950437ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.316901   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316908   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.717107   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717133   59415 pod_ready.go:81] duration metric: took 400.215265ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.717143   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717151   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.117365   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117403   59415 pod_ready.go:81] duration metric: took 400.242952ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.117416   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117427   59415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.517914   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517950   59415 pod_ready.go:81] duration metric: took 400.512217ms for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.517962   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517974   59415 pod_ready.go:38] duration metric: took 1.302609845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:29.518009   59415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:35:29.534665   59415 ops.go:34] apiserver oom_adj: -16
	I0319 20:35:29.534686   59415 kubeadm.go:591] duration metric: took 8.39118752s to restartPrimaryControlPlane
	I0319 20:35:29.534697   59415 kubeadm.go:393] duration metric: took 8.447087595s to StartCluster
	I0319 20:35:29.534713   59415 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.534814   59415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:29.536379   59415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.536620   59415 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:35:29.538397   59415 out.go:177] * Verifying Kubernetes components...
	I0319 20:35:29.536707   59415 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:35:29.536837   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:29.539696   59415 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-421660"
	I0319 20:35:29.539709   59415 addons.go:69] Setting metrics-server=true in profile "embed-certs-421660"
	I0319 20:35:29.539739   59415 addons.go:234] Setting addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:29.539747   59415 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-421660"
	W0319 20:35:29.539751   59415 addons.go:243] addon metrics-server should already be in state true
	W0319 20:35:29.539757   59415 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:35:29.539782   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539786   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539700   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:29.539700   59415 addons.go:69] Setting default-storageclass=true in profile "embed-certs-421660"
	I0319 20:35:29.539882   59415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-421660"
	I0319 20:35:29.540079   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540098   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540107   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540120   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540243   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540282   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.554668   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0319 20:35:29.554742   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0319 20:35:29.554815   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0319 20:35:29.555109   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555148   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555220   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555703   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555708   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555722   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555726   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555828   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555847   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.556077   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556206   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556273   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.556627   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556669   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.556753   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556787   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.559109   59415 addons.go:234] Setting addon default-storageclass=true in "embed-certs-421660"
	W0319 20:35:29.559126   59415 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:35:29.559150   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.559390   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.559425   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.570567   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0319 20:35:29.571010   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.571467   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.571492   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.571831   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.572018   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.573621   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.575889   59415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:35:29.574300   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0319 20:35:29.574529   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0319 20:35:29.577448   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:35:29.577473   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:35:29.577496   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.577913   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.577957   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.578350   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578382   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.578751   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.578877   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578901   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.579318   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.579431   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.579495   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.579509   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.580582   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581050   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.581074   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581166   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.581276   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.583314   59415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:29.581522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.584941   59415 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.584951   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:35:29.584963   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.584980   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.585154   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.587700   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588076   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.588104   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588289   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.588463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.588614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.588791   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.594347   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0319 20:35:29.594626   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.595030   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.595062   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.595384   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.595524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.596984   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.597209   59415 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.597224   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:35:29.597238   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.599955   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.600457   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600533   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.600682   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.600829   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.600926   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.719989   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:29.737348   59415 node_ready.go:35] waiting up to 6m0s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:29.839479   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.839994   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:35:29.840016   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:35:29.852112   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.904335   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:35:29.904358   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:35:29.969646   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:29.969675   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:35:30.031528   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:31.120085   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.280572793s)
	I0319 20:35:31.120135   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120148   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120172   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268019206s)
	I0319 20:35:31.120214   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120229   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120430   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120448   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120457   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120544   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120564   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120588   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120606   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120758   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120788   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120827   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120833   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120841   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.127070   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.127085   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.127287   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.127301   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.138956   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.107385118s)
	I0319 20:35:31.139006   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139027   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139257   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139301   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139319   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139330   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139342   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139546   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139550   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139564   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139579   59415 addons.go:470] Verifying addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:31.141587   59415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:31.142927   59415 addons.go:505] duration metric: took 1.606231661s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:35:31.741584   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.101840   60008 start.go:364] duration metric: took 2m35.508355671s to acquireMachinesLock for "default-k8s-diff-port-385240"
	I0319 20:35:36.101908   60008 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:36.101921   60008 fix.go:54] fixHost starting: 
	I0319 20:35:36.102308   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:36.102352   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:36.118910   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0319 20:35:36.119363   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:36.119926   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:35:36.119957   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:36.120271   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:36.120450   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:36.120614   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:35:36.122085   60008 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385240: state=Stopped err=<nil>
	I0319 20:35:36.122112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	W0319 20:35:36.122284   60008 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:36.124242   60008 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385240" ...
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
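The configureAuth step above regenerates the machine's server certificate (SANs 127.0.0.1, 192.168.61.28, localhost, minikube, old-k8s-version-159022) and copies it to /etc/docker on the guest. A quick way to confirm what actually landed there, assuming the paths from this log:

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'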
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
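The SSH command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; this only takes effect because the crio unit in the minikube guest image is assumed to source that file as an environment file (the unit itself is not shown in this log). To verify on the guest:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i -A2 environment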
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
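Docker and cri-dockerd have now been stopped, disabled and masked so that CRI-O is the only runtime answering on the CRI socket. A small, hedged sanity check that nothing was left running or enabled:

    systemctl is-active docker.service cri-docker.service || true
    systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket || true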
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
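After the sed edits above, the CRI-O drop-in should carry the pause image, cgroup driver and conmon cgroup that this log configures. A sketch of how to confirm it, with the expected values reconstructed from the commands rather than dumped from the file:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"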
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
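The sysctl probe fails only because br_netfilter is not loaded yet; the modprobe that follows loads it, and IP forwarding is switched on directly through /proc. Both can be re-checked afterwards with:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward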
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:35:33.742461   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.241937   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.743420   59415 node_ready.go:49] node "embed-certs-421660" has status "Ready":"True"
	I0319 20:35:36.743447   59415 node_ready.go:38] duration metric: took 7.006070851s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:36.743458   59415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:36.749810   59415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:36.125778   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Start
	I0319 20:35:36.125974   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring networks are active...
	I0319 20:35:36.126542   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network default is active
	I0319 20:35:36.126934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network mk-default-k8s-diff-port-385240 is active
	I0319 20:35:36.127367   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Getting domain xml...
	I0319 20:35:36.128009   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Creating domain...
	I0319 20:35:37.396589   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting to get IP...
	I0319 20:35:37.397626   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398211   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398294   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.398203   60655 retry.go:31] will retry after 263.730992ms: waiting for machine to come up
	I0319 20:35:37.663811   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664345   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664379   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.664300   60655 retry.go:31] will retry after 308.270868ms: waiting for machine to come up
	I0319 20:35:37.974625   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975061   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975095   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.975027   60655 retry.go:31] will retry after 376.884777ms: waiting for machine to come up
	I0319 20:35:38.353624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354101   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354129   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.354056   60655 retry.go:31] will retry after 419.389718ms: waiting for machine to come up
	I0319 20:35:38.774777   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775271   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775299   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.775224   60655 retry.go:31] will retry after 757.534448ms: waiting for machine to come up
	I0319 20:35:39.534258   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534739   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534766   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:39.534698   60655 retry.go:31] will retry after 921.578914ms: waiting for machine to come up
	I0319 20:35:40.457637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458154   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:40.458092   60655 retry.go:31] will retry after 1.079774724s: waiting for machine to come up
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
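The /etc/hosts rewrite above is a filter-and-append idiom: strip any stale host.minikube.internal line, append the current gateway IP, and copy the result back as root. An unrolled sketch of the same idiom (bash, comments added here):

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$    # keep everything except the old entry
    printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/h.$$   # append the gateway alias
    sudo cp /tmp/h.$$ /etc/hosts                                   # install the rewritten file as root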
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:38.759504   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.258580   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.539643   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540163   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:41.540113   60655 retry.go:31] will retry after 1.174814283s: waiting for machine to come up
	I0319 20:35:42.716195   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716547   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716576   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:42.716510   60655 retry.go:31] will retry after 1.464439025s: waiting for machine to come up
	I0319 20:35:44.183190   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183673   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183701   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:44.183628   60655 retry.go:31] will retry after 2.304816358s: waiting for machine to come up
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
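The [Unit]/[Service] fragment above is the kubelet drop-in rendered for this profile; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The effective unit can be inspected on the node with:

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf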
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
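The rendered kubeadm config is staged as kubeadm.yaml.new rather than applied immediately, presumably so it can be compared against any configuration already on the node before the restart decision further down. To see exactly what was staged:

    sudo cat /var/tmp/minikube/kubeadm.yaml.new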
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
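The hex names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: the value printed by openssl x509 -hash becomes the symlink name that OpenSSL probes under /etc/ssl/certs when building a chain. The same pattern by hand, using the minikube CA as an example:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"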
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
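The block above links each CA into /etc/ssl/certs by its OpenSSL subject hash and then runs `openssl x509 -checkend 86400` against every control-plane certificate, i.e. it confirms each one will still be valid 24 hours from now before attempting a cluster restart. A minimal Go sketch of the same expiry check, assuming a PEM certificate on disk (the path below is just one of the files checked in the log; minikube itself shells out to openssl as shown):

	// checkend.go: report whether a PEM certificate expires within the next 24h,
	// mirroring `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// true when NotAfter falls before now+d, i.e. the cert expires within d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Example path; the log checks the apiserver, etcd and front-proxy client certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}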
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
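None of the four kubeconfig files under /etc/kubernetes exist yet, so each `grep https://control-plane.minikube.internal:8443` probe exits with status 2 and the file is removed so that `kubeadm init phase kubeconfig` can regenerate it. A rough Go sketch of that check-and-remove loop, with the paths and endpoint taken from the log (the helper is illustrative, not minikube's actual implementation):

	// cleanupStaleKubeconfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint, so kubeadm can recreate it from scratch.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func cleanupStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: drop it and let kubeadm rewrite it.
				_ = os.Remove(f)
				fmt.Printf("removed stale config %s\n", f)
			}
		}
	}

	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}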
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:43.756917   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:44.893540   59415 pod_ready.go:92] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.893576   59415 pod_ready.go:81] duration metric: took 8.143737931s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.893592   59415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903602   59415 pod_ready.go:92] pod "etcd-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.903640   59415 pod_ready.go:81] duration metric: took 10.03087ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903653   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926651   59415 pod_ready.go:92] pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.926682   59415 pod_ready.go:81] duration metric: took 23.020281ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926696   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935080   59415 pod_ready.go:92] pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.935113   59415 pod_ready.go:81] duration metric: took 8.409239ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935126   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947241   59415 pod_ready.go:92] pod "kube-proxy-qvn26" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.947269   59415 pod_ready.go:81] duration metric: took 12.135421ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947280   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155416   59415 pod_ready.go:92] pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:45.155441   59415 pod_ready.go:81] duration metric: took 208.152938ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155460   59415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:47.165059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:46.490600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491092   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491121   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:46.491050   60655 retry.go:31] will retry after 2.347371858s: waiting for machine to come up
	I0319 20:35:48.841516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.841995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.842018   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:48.841956   60655 retry.go:31] will retry after 2.70576525s: waiting for machine to come up
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
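With the kubeadm phases kicked off, the restart path polls for a kube-apiserver process roughly twice per second via pgrep (api_server.go:52) until one appears. A small Go sketch of that kind of wait loop, assuming pgrep is available on the target and using an illustrative timeout (the test itself runs the command via sudo over SSH):

	// waitForAPIServer polls pgrep until a kube-apiserver process shows up
	// or the timeout expires, similar to the wait loop in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Exit status 0 means at least one matching process exists.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver process is up")
	}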
	I0319 20:35:49.662363   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.662389   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.549431   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549931   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549959   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:51.549900   60655 retry.go:31] will retry after 3.429745322s: waiting for machine to come up
	I0319 20:35:54.983382   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.983875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Found IP for machine: 192.168.39.77
	I0319 20:35:54.983908   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserving static IP address...
	I0319 20:35:54.983923   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has current primary IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.984212   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.984240   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserved static IP address: 192.168.39.77
	I0319 20:35:54.984292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | skip adding static IP to network mk-default-k8s-diff-port-385240 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"}
	I0319 20:35:54.984307   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for SSH to be available...
	I0319 20:35:54.984322   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Getting to WaitForSSH function...
	I0319 20:35:54.986280   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986591   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.986624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986722   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH client type: external
	I0319 20:35:54.986752   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa (-rw-------)
	I0319 20:35:54.986783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:54.986796   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | About to run SSH command:
	I0319 20:35:54.986805   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | exit 0
	I0319 20:35:55.112421   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:55.112825   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetConfigRaw
	I0319 20:35:55.113456   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.115976   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116349   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.116377   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116587   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:35:55.116847   60008 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:55.116874   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:55.117099   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.119475   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.119911   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.119947   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.120112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.120312   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120478   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120629   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.120793   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.120970   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.120982   60008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:55.229055   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:55.229090   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229360   60008 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385240"
	I0319 20:35:55.229390   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229594   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.232039   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232371   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.232391   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232574   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.232746   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232866   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232967   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.233087   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.233251   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.233264   60008 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385240 && echo "default-k8s-diff-port-385240" | sudo tee /etc/hostname
	I0319 20:35:55.355708   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385240
	
	I0319 20:35:55.355732   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.358292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358610   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.358641   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358880   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.359105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359267   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359415   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.359545   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.359701   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.359724   60008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:55.479083   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
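Provisioning sets the hostname with `hostname` plus `tee /etc/hostname`, then applies the idempotent /etc/hosts patch quoted above: rewrite an existing 127.0.1.1 entry if present, otherwise append one. A sketch of generating that snippet from a machine name (illustrative only; the exact script minikube sent is the one printed in the log):

	// hostsPatch returns the idempotent /etc/hosts edit shown above,
	// parameterized by machine name.
	package main

	import (
		"fmt"
		"strings"
	)

	func hostsPatch(name string) string {
		lines := []string{
			fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then`, name),
			`  if grep -xq '127.0.1.1\s.*' /etc/hosts; then`,
			fmt.Sprintf(`    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;`, name),
			`  else`,
			fmt.Sprintf(`    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;`, name),
			`  fi`,
			`fi`,
		}
		return strings.Join(lines, "\n")
	}

	func main() {
		fmt.Println(hostsPatch("default-k8s-diff-port-385240"))
	}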
	I0319 20:35:55.479109   60008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:55.479126   60008 buildroot.go:174] setting up certificates
	I0319 20:35:55.479134   60008 provision.go:84] configureAuth start
	I0319 20:35:55.479143   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.479433   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.482040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482378   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.482408   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482535   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.484637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485035   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.485062   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485212   60008 provision.go:143] copyHostCerts
	I0319 20:35:55.485272   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:55.485283   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:55.485334   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:55.485425   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:55.485434   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:55.485454   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:55.485560   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:55.485569   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:55.485586   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:55.485642   60008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385240 san=[127.0.0.1 192.168.39.77 default-k8s-diff-port-385240 localhost minikube]
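provision.go:117 then issues a per-machine server certificate signed by the minikube CA, with SANs for 127.0.0.1, the VM IP 192.168.39.77, the machine name, localhost and minikube. A compact, self-contained Go sketch of issuing such a SAN-bearing certificate (it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and skips error handling for brevity):

	// Issue a server certificate whose SANs match those listed in the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA; minikube would load its existing ca.pem / ca-key.pem instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-385240"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.77")},
			DNSNames:     []string{"default-k8s-diff-port-385240", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}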
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.449710   59019 start.go:364] duration metric: took 57.255031003s to acquireMachinesLock for "no-preload-414130"
	I0319 20:35:56.449774   59019 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:56.449786   59019 fix.go:54] fixHost starting: 
	I0319 20:35:56.450187   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:56.450225   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:56.469771   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0319 20:35:56.470265   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:56.470764   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:35:56.470799   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:56.471187   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:56.471362   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:35:56.471545   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:35:56.473295   59019 fix.go:112] recreateIfNeeded on no-preload-414130: state=Stopped err=<nil>
	I0319 20:35:56.473323   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	W0319 20:35:56.473480   59019 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:56.475296   59019 out.go:177] * Restarting existing kvm2 VM for "no-preload-414130" ...
	I0319 20:35:56.476767   59019 main.go:141] libmachine: (no-preload-414130) Calling .Start
	I0319 20:35:56.476947   59019 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:35:56.477657   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:35:56.478036   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:35:56.478443   59019 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:35:56.479131   59019 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:35:53.663220   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:56.163557   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:55.738705   60008 provision.go:177] copyRemoteCerts
	I0319 20:35:55.738779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:55.738812   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.741292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741618   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.741644   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741835   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.741997   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.742105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.742260   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:55.828017   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:55.854341   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0319 20:35:55.881167   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:55.906768   60008 provision.go:87] duration metric: took 427.621358ms to configureAuth
	I0319 20:35:55.906795   60008 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:55.907007   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:55.907097   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.909518   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.909834   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.909863   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.910008   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.910193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910492   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.910670   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.910835   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.910849   60008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:56.207010   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:56.207036   60008 machine.go:97] duration metric: took 1.090170805s to provisionDockerMachine
	I0319 20:35:56.207049   60008 start.go:293] postStartSetup for "default-k8s-diff-port-385240" (driver="kvm2")
	I0319 20:35:56.207066   60008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:56.207086   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.207410   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:56.207435   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.210075   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210494   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.210526   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210671   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.210828   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.211016   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.211167   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.295687   60008 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:56.300508   60008 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:56.300531   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:56.300601   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:56.300677   60008 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:56.300779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:56.310829   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:56.337456   60008 start.go:296] duration metric: took 130.396402ms for postStartSetup
	I0319 20:35:56.337492   60008 fix.go:56] duration metric: took 20.235571487s for fixHost
	I0319 20:35:56.337516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.339907   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340361   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.340388   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340552   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.340749   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.340888   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.341040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.341198   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:56.341357   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:56.341367   60008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:56.449557   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880556.425761325
	
	I0319 20:35:56.449580   60008 fix.go:216] guest clock: 1710880556.425761325
	I0319 20:35:56.449587   60008 fix.go:229] Guest: 2024-03-19 20:35:56.425761325 +0000 UTC Remote: 2024-03-19 20:35:56.337496936 +0000 UTC m=+175.893119280 (delta=88.264389ms)
	I0319 20:35:56.449619   60008 fix.go:200] guest clock delta is within tolerance: 88.264389ms
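fix.go reads the guest clock with `date +%s.%N` over SSH, compares it to the host clock and proceeds because the ~88ms delta is within tolerance; a larger skew would trigger a time resync. A small Go sketch of parsing that output and checking the skew (the 2-second tolerance is an assumption for illustration):

	// Parse the guest's `date +%s.%N` output and check its skew against the host clock.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
		// float64 parsing is precise enough for millisecond-level skew checks.
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
	}

	func main() {
		// Output captured in the log above; in practice it comes back over SSH.
		delta, ok, err := clockDeltaOK("1710880556.425761325", 2*time.Second)
		fmt.Println(delta, ok, err)
	}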
	I0319 20:35:56.449624   60008 start.go:83] releasing machines lock for "default-k8s-diff-port-385240", held for 20.347739998s
	I0319 20:35:56.449647   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.449915   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:56.452764   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453172   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.453204   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453363   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.453973   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454275   60008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:56.454328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.454443   60008 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:56.454466   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.457060   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457284   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457383   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457418   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457536   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457555   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457567   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457831   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457977   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458126   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458139   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.458282   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.537675   60008 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:56.564279   60008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:56.708113   60008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:56.716216   60008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:56.716301   60008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:56.738625   60008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:56.738643   60008 start.go:494] detecting cgroup driver to use...
	I0319 20:35:56.738707   60008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:56.756255   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:56.772725   60008 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:56.772785   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:56.793261   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:56.812368   60008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:56.948137   60008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:57.139143   60008 docker.go:233] disabling docker service ...
	I0319 20:35:57.139212   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:57.156414   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:57.173655   60008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:57.313924   60008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:57.459539   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:57.478913   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:57.506589   60008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:57.506663   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.520813   60008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:57.520871   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.534524   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.547833   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.568493   60008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:57.582367   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.595859   60008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.616441   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.633329   60008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:57.648803   60008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:57.648886   60008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:57.667845   60008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:57.680909   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:57.825114   60008 ssh_runner.go:195] Run: sudo systemctl restart crio
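The sed edits above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the restart: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, put conmon in the pod cgroup, and allow unprivileged binds to low ports via a default sysctl. Reconstructed from those commands (not copied from the VM, with section headers as they appear in CRI-O's stock layout), the resulting fragment looks roughly like:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]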
	I0319 20:35:57.996033   60008 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:57.996118   60008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:58.001875   60008 start.go:562] Will wait 60s for crictl version
	I0319 20:35:58.001947   60008 ssh_runner.go:195] Run: which crictl
	I0319 20:35:58.006570   60008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:58.060545   60008 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:58.060628   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.104858   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.148992   60008 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:58.150343   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:58.153222   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153634   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:58.153663   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153924   60008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:58.158830   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:58.174622   60008 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:58.174760   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:58.174819   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:58.220802   60008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:58.220879   60008 ssh_runner.go:195] Run: which lz4
	I0319 20:35:58.225914   60008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:58.230673   60008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:58.230702   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:59.959612   60008 crio.go:462] duration metric: took 1.733738299s to copy over tarball
	I0319 20:35:59.959694   60008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
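The preload path above checks `sudo crictl images --output json` for registry.k8s.io/kube-apiserver:v1.29.3; since the image store is empty on the fresh VM, the ~403 MB preloaded-images tarball is copied over and unpacked into /var. A small sketch of the image-presence check, assuming crictl's JSON output has the usual images[].repoTags shape:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // criImages mirrors the part of `crictl images --output json` we care about.
    type criImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the given tag is already present in the runtime.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs criImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.3")
        fmt.Println(ok, err) // false on a fresh VM, which triggers the preload copy
    }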
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.782684   59019 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:35:57.783613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:57.784088   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:57.784180   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:57.784077   60806 retry.go:31] will retry after 304.011729ms: waiting for machine to come up
	I0319 20:35:58.089864   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.090398   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.090431   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.090325   60806 retry.go:31] will retry after 268.702281ms: waiting for machine to come up
	I0319 20:35:58.360743   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.361173   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.361201   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.361116   60806 retry.go:31] will retry after 373.34372ms: waiting for machine to come up
	I0319 20:35:58.735810   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.736490   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.736518   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.736439   60806 retry.go:31] will retry after 588.9164ms: waiting for machine to come up
	I0319 20:35:59.327363   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.327908   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.327938   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.327881   60806 retry.go:31] will retry after 623.38165ms: waiting for machine to come up
	I0319 20:35:59.952641   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.953108   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.953138   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.953090   60806 retry.go:31] will retry after 896.417339ms: waiting for machine to come up
	I0319 20:36:00.851032   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:00.851485   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:00.851514   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:00.851435   60806 retry.go:31] will retry after 869.189134ms: waiting for machine to come up
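The no-preload-414130 lines above are libmachine polling the KVM network's DHCP leases for the new domain's IP, retrying with a growing, jittered delay until a lease shows up. A toy version of that retry loop; getIP is hypothetical and stands in for the libvirt lease lookup:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // getIP is a hypothetical stand-in for the libvirt DHCP-lease lookup.
    func getIP() (string, error) { return "", errors.New("no lease yet") }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            ip, err := getIP()
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Grow the delay and add jitter, like the retry.go waits in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        fmt.Println("timed out waiting for machine to come up")
    }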
	I0319 20:35:58.168341   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:00.664629   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:02.594104   60008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634373226s)
	I0319 20:36:02.594140   60008 crio.go:469] duration metric: took 2.634502157s to extract the tarball
	I0319 20:36:02.594149   60008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:36:02.635454   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:02.692442   60008 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:36:02.692468   60008 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:36:02.692477   60008 kubeadm.go:928] updating node { 192.168.39.77 8444 v1.29.3 crio true true} ...
	I0319 20:36:02.692613   60008 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-385240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:02.692697   60008 ssh_runner.go:195] Run: crio config
	I0319 20:36:02.749775   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:02.749798   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:02.749809   60008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:02.749828   60008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385240 NodeName:default-k8s-diff-port-385240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:02.749967   60008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-385240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:02.750034   60008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:36:02.760788   60008 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:02.760843   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:02.770999   60008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0319 20:36:02.789881   60008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:36:02.809005   60008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0319 20:36:02.831122   60008 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:02.835609   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:02.850186   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:02.990032   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:03.013831   60008 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240 for IP: 192.168.39.77
	I0319 20:36:03.013858   60008 certs.go:194] generating shared ca certs ...
	I0319 20:36:03.013879   60008 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:03.014072   60008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:03.014125   60008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:03.014137   60008 certs.go:256] generating profile certs ...
	I0319 20:36:03.014256   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.key
	I0319 20:36:03.014325   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key.5c19d013
	I0319 20:36:03.014389   60008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key
	I0319 20:36:03.014549   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:03.014602   60008 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:03.014626   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:03.014658   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:03.014691   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:03.014728   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:03.014793   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:03.015673   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:03.070837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:03.115103   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:03.150575   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:03.210934   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 20:36:03.254812   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:36:03.286463   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:03.315596   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:03.347348   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:03.375837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:03.407035   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:03.439726   60008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:03.461675   60008 ssh_runner.go:195] Run: openssl version
	I0319 20:36:03.468238   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:03.482384   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487682   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487739   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.494591   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:03.509455   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:03.522545   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527556   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527617   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.533925   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:03.546851   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:03.559553   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564547   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564595   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.570824   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:03.584339   60008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:03.589542   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:03.595870   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:03.602530   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:03.609086   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:03.615621   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:03.622477   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
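Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; the restart path only reuses certs that still have that much headroom. The same check expressed in Go with crypto/x509 (the path is one of the guest-side paths from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the Go equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err) // true would force certificate regeneration
    }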
	I0319 20:36:03.629097   60008 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:03.629186   60008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:03.629234   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.674484   60008 cri.go:89] found id: ""
	I0319 20:36:03.674568   60008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:03.686995   60008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:03.687020   60008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:03.687026   60008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:03.687094   60008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:03.702228   60008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:03.703334   60008 kubeconfig.go:125] found "default-k8s-diff-port-385240" server: "https://192.168.39.77:8444"
	I0319 20:36:03.705508   60008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:03.719948   60008 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.77
	I0319 20:36:03.719985   60008 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:03.719997   60008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:03.720073   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.761557   60008 cri.go:89] found id: ""
	I0319 20:36:03.761619   60008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:03.781849   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:03.793569   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:03.793601   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:03.793652   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:36:03.804555   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:03.804605   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:03.816728   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:36:03.828247   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:03.828318   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:03.840814   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.853100   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:03.853168   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.867348   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:36:03.879879   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:03.879944   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:03.893810   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:03.906056   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:04.038911   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.173514   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134566983s)
	I0319 20:36:05.173547   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.395951   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.480821   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
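Rather than a full `kubeadm init`, the restart path replays individual init phases against the restored kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, then etcd. A rough local sketch of that sequence (the real runner executes these over SSH with sudo and the cached v1.29.3 binaries on PATH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Replay the same kubeadm init phases the restart path uses, in order.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Prefer the cached release binaries, mirroring the env PATH override in the log.
            cmd.Env = append(cmd.Environ(), "PATH=/var/lib/minikube/binaries/v1.29.3:/usr/bin:/bin")
            out, err := cmd.CombinedOutput()
            fmt.Printf("kubeadm %v: err=%v\n%s\n", args, err, out)
        }
    }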
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.721671   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:01.722186   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:01.722212   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:01.722142   60806 retry.go:31] will retry after 997.299446ms: waiting for machine to come up
	I0319 20:36:02.720561   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:02.721007   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:02.721037   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:02.720958   60806 retry.go:31] will retry after 1.64420318s: waiting for machine to come up
	I0319 20:36:04.367668   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:04.368140   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:04.368179   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:04.368083   60806 retry.go:31] will retry after 1.972606192s: waiting for machine to come up
	I0319 20:36:06.342643   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:06.343192   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:06.343236   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:06.343136   60806 retry.go:31] will retry after 2.056060208s: waiting for machine to come up
	I0319 20:36:03.164447   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.665089   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.581797   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:05.581879   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.082565   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.582872   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.628756   60008 api_server.go:72] duration metric: took 1.046965637s to wait for apiserver process to appear ...
	I0319 20:36:06.628786   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:06.628808   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:06.629340   60008 api_server.go:269] stopped: https://192.168.39.77:8444/healthz: Get "https://192.168.39.77:8444/healthz": dial tcp 192.168.39.77:8444: connect: connection refused
	I0319 20:36:07.128890   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.231991   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.232024   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.232039   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.280784   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.280820   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.629356   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.660326   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:09.660434   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.128936   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.139305   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:10.139336   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.629187   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.635922   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:36:10.654111   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:36:10.654137   60008 api_server.go:131] duration metric: took 4.025345365s to wait for apiserver health ...
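The healthz loop above tolerates 403 ("system:anonymous") and 500 responses while the bootstrap post-start hooks finish, and only returns once /healthz answers 200. A compressed sketch of that poll against https://192.168.39.77:8444/healthz, skipping TLS verification because the apiserver cert is signed by minikubeCA rather than a system root:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.77:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 403 and 500 are expected while RBAC bootstrap hooks finish.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }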
	I0319 20:36:10.654146   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:10.654154   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:10.656104   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.401478   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:08.402086   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:08.402111   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:08.402001   60806 retry.go:31] will retry after 2.487532232s: waiting for machine to come up
	I0319 20:36:10.891005   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:10.891550   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:10.891591   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:10.891503   60806 retry.go:31] will retry after 3.741447035s: waiting for machine to come up
	I0319 20:36:08.163468   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.165537   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:12.661667   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.657654   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:10.672795   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
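The bridge CNI step above writes a 457-byte /etc/cni/net.d/1-k8s.conflist for the 10.244.0.0/16 pod CIDR. The sketch below builds a comparable conflist; the exact fields minikube emits may differ, this is only the common bridge + portmap shape:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape of a bridge CNI config list; not minikube's verbatim file.
        conflist := map[string]any{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out)) // would land in /etc/cni/net.d/1-k8s.conflist
    }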
	I0319 20:36:10.715527   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:10.728811   60008 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:10.728850   60008 system_pods.go:61] "coredns-76f75df574-hsdk2" [319e5411-97e4-4021-80d0-b39195acb696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:10.728862   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [d10870b0-a0e1-47aa-baf9-07065c1d9142] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:10.728873   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [4925af1b-328f-42ee-b2ef-78b58fcbdd0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:10.728883   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [6dad1c39-3fbc-4364-9ed8-725c0f518191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:10.728889   60008 system_pods.go:61] "kube-proxy-bwj22" [9cc86566-612e-48bc-94c9-a2dad6978c92] Running
	I0319 20:36:10.728896   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [e9c38443-ea8c-4590-94ca-61077f850b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:10.728904   60008 system_pods.go:61] "metrics-server-57f55c9bc5-ddl2q" [ecb174e4-18b0-459e-afb1-137a1f6bdd67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:10.728919   60008 system_pods.go:61] "storage-provisioner" [95fb27b5-769c-4420-8021-3d97942c9f42] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0319 20:36:10.728931   60008 system_pods.go:74] duration metric: took 13.321799ms to wait for pod list to return data ...
	I0319 20:36:10.728944   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:10.743270   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:10.743312   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:10.743326   60008 node_conditions.go:105] duration metric: took 14.37332ms to run NodePressure ...
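The kube-system inventory above comes from a pod list against the freshly restarted apiserver; pods still reporting ContainersNotReady are tolerated at this stage. A minimal client-go sketch of the same query, pointed at the in-VM kubeconfig path from the log (run wherever that file is reachable):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List kube-system pods, the data behind "8 kube-system pods found".
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
        }
    }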
	I0319 20:36:10.743348   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:11.028786   60008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034096   60008 kubeadm.go:733] kubelet initialised
	I0319 20:36:11.034115   60008 kubeadm.go:734] duration metric: took 5.302543ms waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034122   60008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:11.040118   60008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.046021   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046048   60008 pod_ready.go:81] duration metric: took 5.906752ms for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.046060   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046069   60008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.051677   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051700   60008 pod_ready.go:81] duration metric: took 5.61463ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.051712   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051721   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.057867   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057893   60008 pod_ready.go:81] duration metric: took 6.163114ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.057905   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057912   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:13.065761   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.634526   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:14.635125   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:14.635155   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:14.635074   60806 retry.go:31] will retry after 3.841866145s: waiting for machine to come up
	I0319 20:36:14.662669   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.664913   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:15.565340   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:17.567623   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:19.570775   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.479276   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.479810   59019 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:36:18.479836   59019 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:36:18.479852   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.480232   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.480279   59019 main.go:141] libmachine: (no-preload-414130) DBG | skip adding static IP to network mk-no-preload-414130 - found existing host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"}
	I0319 20:36:18.480297   59019 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:36:18.480319   59019 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:36:18.480336   59019 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:36:18.482725   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483025   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.483052   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483228   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:36:18.483262   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:36:18.483299   59019 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:36:18.483320   59019 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:36:18.483373   59019 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:36:18.612349   59019 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
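
WaitForSSH above simply keeps invoking an external ssh client with the options printed in the DBG line (no known-hosts checks, key-only auth, the machine's id_rsa) and running `exit 0` until the connection succeeds. A sketch of the same probe built from those logged flags (IP and key path copied from the log only as an example, not taken from minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs "exit 0" on the guest with roughly the flags shown in the log;
    // success means sshd is up and accepting our key.
    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "PasswordAuthentication=no", "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null", "-o", "IdentitiesOnly=yes",
            "-i", keyPath, "docker@" + ip, "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa"
        for !sshReady("192.168.72.29", key) {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
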
	I0319 20:36:18.612766   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:36:18.613495   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.616106   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616459   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.616498   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616729   59019 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:36:18.616940   59019 machine.go:94] provisionDockerMachine start ...
	I0319 20:36:18.616957   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:18.617150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.619316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619599   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.619620   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619750   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.619895   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620054   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620166   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.620339   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.620508   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.620521   59019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:36:18.729177   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:36:18.729203   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729483   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:36:18.729511   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729728   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.732330   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732633   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.732664   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.732944   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733087   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733211   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.733347   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.733513   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.733528   59019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:36:18.857142   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:36:18.857178   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.860040   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860434   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.860465   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860682   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.860907   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861102   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861283   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.861462   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.861661   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.861685   59019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:36:18.976726   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
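
The two SSH commands above first set the guest hostname and then ensure /etc/hosts carries a matching 127.0.1.1 entry so the node can resolve its own name. Condensed into one hedged sketch (the hostname is the one from this run; the helper below is illustrative and slightly simplified, e.g. the real script uses sed to replace an existing 127.0.1.1 line rather than only appending):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // provisionHostname mirrors the two provisioning commands from the log: set the
    // hostname, write /etc/hostname, then add a 127.0.1.1 entry if one is missing.
    func provisionHostname(sshTarget, keyPath, name string) error {
        script := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
            `if ! grep -q '%[1]s' /etc/hosts; then `+
            `echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi`, name)
        return exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
            sshTarget, script).Run()
    }

    func main() {
        err := provisionHostname("docker@192.168.72.29",
            "/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa",
            "no-preload-414130")
        fmt.Println("provision error:", err)
    }
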
	I0319 20:36:18.976755   59019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:36:18.976776   59019 buildroot.go:174] setting up certificates
	I0319 20:36:18.976789   59019 provision.go:84] configureAuth start
	I0319 20:36:18.976803   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.977095   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.980523   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.980948   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.980976   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.981150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.983394   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983720   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.983741   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983887   59019 provision.go:143] copyHostCerts
	I0319 20:36:18.983949   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:36:18.983959   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:36:18.984009   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:36:18.984092   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:36:18.984099   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:36:18.984118   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:36:18.984224   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:36:18.984237   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:36:18.984284   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:36:18.984348   59019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
	I0319 20:36:19.241365   59019 provision.go:177] copyRemoteCerts
	I0319 20:36:19.241422   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:36:19.241445   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.244060   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244362   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.244388   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244593   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.244781   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.244956   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.245125   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.332749   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:36:19.360026   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:36:19.386680   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:36:19.414673   59019 provision.go:87] duration metric: took 437.87318ms to configureAuth
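
configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the machine name, and copyRemoteCerts then pushes server.pem, server-key.pem and ca.pem into /etc/docker on the guest. As a rough illustration of that last copy step only (file names and sizes are from the log; the scp-then-install approach is an assumption, since minikube actually streams the files through its own ssh_runner):

    package main

    import (
        "log"
        "os/exec"
        "path/filepath"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa"
        // Destination paths mirror the /etc/docker targets in the log.
        copies := map[string]string{
            ".minikube/machines/server.pem":     "/etc/docker/server.pem",
            ".minikube/machines/server-key.pem": "/etc/docker/server-key.pem",
            ".minikube/certs/ca.pem":            "/etc/docker/ca.pem",
        }
        for src, dst := range copies {
            tmp := "/tmp/" + filepath.Base(dst)
            // Stage in /tmp first; /etc/docker on the guest is root-owned.
            if out, err := exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no",
                src, "docker@192.168.72.29:"+tmp).CombinedOutput(); err != nil {
                log.Fatalf("copying %s: %v (%s)", src, err, out)
            }
            if out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
                "docker@192.168.72.29", "sudo install -m 0600 "+tmp+" "+dst).CombinedOutput(); err != nil {
                log.Fatalf("installing %s: %v (%s)", dst, err, out)
            }
        }
    }
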
	I0319 20:36:19.414697   59019 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:36:19.414893   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:36:19.414964   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.417627   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.417949   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.417974   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.418139   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.418351   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418513   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418687   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.418854   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.419099   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.419120   59019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:36:19.712503   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:36:19.712538   59019 machine.go:97] duration metric: took 1.095583423s to provisionDockerMachine
	I0319 20:36:19.712554   59019 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:36:19.712573   59019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:36:19.712595   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.712918   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:36:19.712953   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.715455   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715779   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.715813   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715917   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.716098   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.716307   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.716455   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.801402   59019 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:36:19.806156   59019 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:36:19.806181   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:36:19.806253   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:36:19.806330   59019 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:36:19.806451   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:36:19.818601   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:19.845698   59019 start.go:296] duration metric: took 133.131789ms for postStartSetup
	I0319 20:36:19.845728   59019 fix.go:56] duration metric: took 23.395944884s for fixHost
	I0319 20:36:19.845746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.848343   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848727   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.848760   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848909   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.849090   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849256   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849452   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.849667   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.849843   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.849853   59019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:36:19.957555   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880579.901731357
	
	I0319 20:36:19.957574   59019 fix.go:216] guest clock: 1710880579.901731357
	I0319 20:36:19.957581   59019 fix.go:229] Guest: 2024-03-19 20:36:19.901731357 +0000 UTC Remote: 2024-03-19 20:36:19.845732308 +0000 UTC m=+363.236094224 (delta=55.999049ms)
	I0319 20:36:19.957612   59019 fix.go:200] guest clock delta is within tolerance: 55.999049ms
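
The fix.go lines above read the guest clock over SSH (the probe is evidently `date +%s.%N`; the logger mangles the format verbs into the `%!s(MISSING)` form seen above) and accept the host/guest delta as long as it stays within a small tolerance, 55.999049ms here. A self-contained sketch of that comparison (the tolerance constant is an assumption; minikube's actual threshold is not shown in this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        const tolerance = time.Second // assumed threshold, for illustration only

        // Read the guest clock as seconds.nanoseconds, the same probe as in the log.
        out, err := exec.Command("ssh", "docker@192.168.72.29", "date +%s.%N").Output()
        if err != nil {
            fmt.Println("clock probe failed:", err)
            return
        }
        fields := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
        if len(fields) != 2 {
            fmt.Println("unexpected date output:", string(out))
            return
        }
        sec, _ := strconv.ParseInt(fields[0], 10, 64)
        nsec, _ := strconv.ParseInt(fields[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta <= tolerance)
    }
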
	I0319 20:36:19.957625   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 23.507874645s
	I0319 20:36:19.957656   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.957889   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:19.960613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.960930   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.960957   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.961108   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961627   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961804   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961883   59019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:36:19.961930   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.961996   59019 ssh_runner.go:195] Run: cat /version.json
	I0319 20:36:19.962022   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.964593   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.964790   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965034   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965057   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965250   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965368   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965397   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965416   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965529   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965611   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965677   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965764   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.965788   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965893   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:20.041410   59019 ssh_runner.go:195] Run: systemctl --version
	I0319 20:36:20.067540   59019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:36:20.214890   59019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:36:20.222680   59019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:36:20.222735   59019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:36:20.239981   59019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:36:20.240003   59019 start.go:494] detecting cgroup driver to use...
	I0319 20:36:20.240066   59019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:36:20.260435   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:36:20.277338   59019 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:36:20.277398   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:36:20.294069   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:36:20.309777   59019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:36:20.443260   59019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:36:20.595476   59019 docker.go:233] disabling docker service ...
	I0319 20:36:20.595552   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:36:20.612622   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:36:20.627717   59019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:36:20.790423   59019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:36:20.915434   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:36:20.932043   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:36:20.953955   59019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:36:20.954026   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.966160   59019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:36:20.966230   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.978217   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.990380   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.002669   59019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:36:21.014880   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.026125   59019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.045239   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.056611   59019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:36:21.067763   59019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:36:21.067818   59019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:36:21.084054   59019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:36:21.095014   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:21.237360   59019 ssh_runner.go:195] Run: sudo systemctl restart crio
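
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place over SSH: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs with conmon_cgroup = "pod", opens net.ipv4.ip_unprivileged_port_start=0 via default_sysctls, loads br_netfilter when the netfilter sysctl is missing, enables ip_forward, and finally restarts crio. A sketch running a subset of those same edits (the shell commands are taken directly from the lines above; the ssh wrapper is illustrative):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // The same edits as in the log, run one after another on the guest.
        steps := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo modprobe br_netfilter`,
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload && sudo systemctl restart crio`,
        }
        for _, step := range steps {
            if out, err := exec.Command("ssh", "docker@192.168.72.29", step).CombinedOutput(); err != nil {
                log.Fatalf("step %q failed: %v\n%s", step, err, out)
            }
        }
    }
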
	I0319 20:36:21.396979   59019 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:36:21.397047   59019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:36:21.402456   59019 start.go:562] Will wait 60s for crictl version
	I0319 20:36:21.402509   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.406963   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:36:21.446255   59019 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:36:21.446351   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.477273   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.519196   59019 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:36:21.520520   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:21.523401   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.523792   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:21.523822   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.524033   59019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:36:21.528973   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:21.543033   59019 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:36:21.543154   59019 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:36:21.543185   59019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:21.583439   59019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:36:21.583472   59019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:36:21.583515   59019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.583551   59019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.583566   59019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.583610   59019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.583622   59019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.583646   59019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.583731   59019 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:36:21.583766   59019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585216   59019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585225   59019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.585236   59019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.585210   59019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.585247   59019 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:36:21.585253   59019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.585285   59019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.585297   59019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
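
LoadCachedImages above first asks the local Docker daemon for each required image (every lookup fails here, hence the "No such image" lines), then inspects what is already present in the guest's container storage via podman, and marks any image whose ID does not match the expected hash as "needs transfer". A hedged sketch of that decision loop (the expected-ID map uses the two hashes printed later in this log; the real values come from minikube's image cache metadata):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // remoteImageID returns the image ID that podman reports on the guest,
    // or "" if the image is not present there.
    func remoteImageID(image string) string {
        out, err := exec.Command("ssh", "docker@192.168.72.29",
            "sudo podman image inspect --format {{.Id}} "+image).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // Expected IDs copied from the cache_images lines further down in the log.
        want := map[string]string{
            "registry.k8s.io/coredns/coredns:v1.11.1": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
            "registry.k8s.io/etcd:3.5.12-0":           "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
        }
        for image, id := range want {
            if remoteImageID(image) != id {
                fmt.Printf("%q needs transfer\n", image)
            }
        }
    }
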
	I0319 20:36:19.163241   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:21.165282   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:22.071931   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:24.567506   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.567537   60008 pod_ready.go:81] duration metric: took 13.509614974s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.567553   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573414   60008 pod_ready.go:92] pod "kube-proxy-bwj22" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.573444   60008 pod_ready.go:81] duration metric: took 5.881434ms for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573457   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580429   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.580452   60008 pod_ready.go:81] duration metric: took 6.984808ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580463   60008 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.722682   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.727610   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:36:21.738933   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.740326   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.772871   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.801213   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.829968   59019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:36:21.830008   59019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.830053   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.832291   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.945513   59019 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:36:21.945558   59019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.945612   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945618   59019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:36:21.945651   59019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.945663   59019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:36:21.945687   59019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.945695   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945721   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970009   59019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:36:21.970052   59019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.970079   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.970090   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970100   59019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:36:21.970125   59019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.970149   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970177   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:22.062153   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:36:22.062260   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.063754   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.063840   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.091003   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091052   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:22.091104   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091335   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:22.091372   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0319 20:36:22.091382   59019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091405   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:36:22.091423   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0319 20:36:22.091426   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091475   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:22.096817   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0319 20:36:22.155139   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.155289   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.190022   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0319 20:36:22.190072   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.190166   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.507872   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445006   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.353551966s)
	I0319 20:36:26.445031   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0319 20:36:26.445049   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445063   59019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (4.289744726s)
	I0319 20:36:26.445095   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445099   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445107   59019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (4.254920134s)
	I0319 20:36:26.445135   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445176   59019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.937263856s)
	I0319 20:36:26.445228   59019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:36:26.445254   59019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445296   59019 ssh_runner.go:195] Run: which crictl
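
Each image marked as needing transfer is then loaded on the guest from the pre-copied tarball under /var/lib/minikube/images with `sudo podman load -i`, and stale tags are removed with `crictl rmi`; the log times each load (etcd takes about 4.4s here). A minimal sketch of one such load, with the tarball path taken from the log and the ssh wrapper as an illustrative stand-in for minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Load the cached etcd image tarball into the guest's container storage.
        out, err := exec.Command("ssh", "docker@192.168.72.29",
            "sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0").CombinedOutput()
        if err != nil {
            fmt.Printf("podman load failed: %v\n%s\n", err, out)
            return
        }
        fmt.Printf("loaded etcd from cache in %s\n", time.Since(start))
    }
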
	I0319 20:36:23.665322   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.167485   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.588550   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:29.088665   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.407117   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (1.96198659s)
	I0319 20:36:28.407156   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:36:28.407176   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407171   59019 ssh_runner.go:235] Completed: which crictl: (1.961850083s)
	I0319 20:36:28.407212   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407244   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:30.495567   59019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088296063s)
	I0319 20:36:30.495590   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.088358118s)
	I0319 20:36:30.495606   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:36:30.495617   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:36:30.495633   59019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495686   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495735   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:28.662588   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.163637   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.589581   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:34.090180   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.473194   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977482574s)
	I0319 20:36:32.473238   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:36:32.473263   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:32.473260   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977498716s)
	I0319 20:36:32.473294   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0319 20:36:32.473311   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:34.927774   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.454440131s)
	I0319 20:36:34.927813   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:36:34.927842   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:34.927888   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:33.664608   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.163358   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.588459   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:38.590173   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.512011   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.584091271s)
	I0319 20:36:37.512048   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0319 20:36:37.512077   59019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:37.512134   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:38.589202   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.077040733s)
	I0319 20:36:38.589231   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0319 20:36:38.589263   59019 cache_images.go:123] Successfully loaded all cached images
	I0319 20:36:38.589278   59019 cache_images.go:92] duration metric: took 17.005785801s to LoadCachedImages
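The block above is the cached-image phase: each tarball under /var/lib/minikube/images is stat'ed, the copy is skipped when the file already exists on the node, and the image is loaded into the CRI-O store with podman load. A rough local sketch of that skip-or-load decision, assuming podman is on the PATH; it is illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the decision visible in the log above: if the
// tarball is already present, the copy step is skipped and the image is
// loaded into podman's store; otherwise the caller would transfer it first.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball not present, would need to be copied first: %w", err)
	}
	fmt.Printf("copy: skipping %s (exists)\n", tarball)
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// hypothetical path, following the naming convention in the log
	if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.30.0-beta.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

In the real runner the stat and the load both happen over SSH on the node, which is why several of them complete concurrently with multi-second durations.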
	I0319 20:36:38.589291   59019 kubeadm.go:928] updating node { 192.168.72.29 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:36:38.589415   59019 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-414130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
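The kubelet unit drop-in above is rendered from a handful of node-specific values (kubelet binary path, hostname override, node IP). A sketch of how such a drop-in could be generated with text/template; the struct and template text here are assumptions for illustration, not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the values substituted into the drop-in shown above.
type kubeletOpts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	// values taken from the log entry above
	opts := kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet",
		NodeName:    "no-preload-414130",
		NodeIP:      "192.168.72.29",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

The rendered text corresponds to the 10-kubeadm.conf drop-in that is scp'ed to /etc/systemd/system/kubelet.service.d/ a little further down.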
	I0319 20:36:38.589495   59019 ssh_runner.go:195] Run: crio config
	I0319 20:36:38.648312   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:38.648334   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:38.648346   59019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:38.648366   59019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-414130 NodeName:no-preload-414130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:38.648494   59019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-414130"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
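The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a stream and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 is available; illustrative only, not part of the test tooling:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// each document declares apiVersion and kind,
		// e.g. kubeadm.k8s.io/v1beta3 ClusterConfiguration
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}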
	I0319 20:36:38.648554   59019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:36:38.665850   59019 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:38.665928   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:38.678211   59019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0319 20:36:38.701657   59019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:36:38.721498   59019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0319 20:36:38.741159   59019 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:38.745617   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:38.759668   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:38.896211   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
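The bash one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any previous entry is filtered out and a single fresh line is appended before the file is copied back. An equivalent sketch in Go, with the IP and hostname taken from the log; not the code the runner uses:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the host and appends a fresh
// one, matching the grep -v / echo / cp pipeline shown in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry for the control-plane alias
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.29", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}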
	I0319 20:36:38.916698   59019 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130 for IP: 192.168.72.29
	I0319 20:36:38.916720   59019 certs.go:194] generating shared ca certs ...
	I0319 20:36:38.916748   59019 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:38.916888   59019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:38.916930   59019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:38.916943   59019 certs.go:256] generating profile certs ...
	I0319 20:36:38.917055   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.key
	I0319 20:36:38.917134   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key.2d7d554c
	I0319 20:36:38.917185   59019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key
	I0319 20:36:38.917324   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:38.917381   59019 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:38.917396   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:38.917434   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:38.917469   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:38.917501   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:38.917553   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:38.918130   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:38.959630   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:39.007656   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:39.046666   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:39.078901   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:36:39.116600   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:36:39.158517   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:39.188494   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:39.218770   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:39.247341   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:39.275816   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:39.303434   59019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:39.326445   59019 ssh_runner.go:195] Run: openssl version
	I0319 20:36:39.333373   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:39.346280   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352619   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352686   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.359796   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:39.372480   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:39.384231   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389760   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389818   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.396639   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:39.408887   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:39.421847   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427779   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427848   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.434447   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:39.446945   59019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:39.452219   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:39.458729   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:39.465298   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:39.471931   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:39.478810   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:39.485551   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
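Each openssl x509 -checkend 86400 call above asks whether a certificate remains valid for at least another 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check expressed with the standard library, using one of the paths from the log; a sketch only:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given duration, roughly what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}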
	I0319 20:36:39.492084   59019 kubeadm.go:391] StartCluster: {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:39.492210   59019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:39.492297   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.535094   59019 cri.go:89] found id: ""
	I0319 20:36:39.535157   59019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:39.549099   59019 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:39.549123   59019 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:39.549129   59019 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:39.549179   59019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:39.560565   59019 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:39.561570   59019 kubeconfig.go:125] found "no-preload-414130" server: "https://192.168.72.29:8443"
	I0319 20:36:39.563750   59019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:39.578708   59019 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.29
	I0319 20:36:39.578746   59019 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:39.578756   59019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:39.578799   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.620091   59019 cri.go:89] found id: ""
	I0319 20:36:39.620152   59019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:39.639542   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:39.652115   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:39.652133   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:39.652190   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:36:39.664047   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:39.664114   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:39.675218   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:36:39.685482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:39.685533   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:39.695803   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.705482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:39.705538   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.715747   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:36:39.725260   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:39.725324   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:39.735246   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:39.745069   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:39.862945   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.548185   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.794369   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.891458   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
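Once the kubeadm phases above finish, the runner waits for the kube-apiserver process and then polls https://192.168.72.29:8443/healthz, tolerating the 403 and verbose 500 responses seen further down until every poststarthook reports ok. A minimal polling sketch, assuming the self-signed serving certificate from the log (hence InsecureSkipVerify for this probe only); anonymous probes may legitimately see 403 until RBAC bootstrap completes:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// the apiserver serves a self-signed certificate, so skip verification for this probe
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.29:8443/healthz?verbose" // address from the log
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The ?verbose query is what produces the per-poststarthook [+]/[-] listing captured in the log below.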
	I0319 20:36:40.992790   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:40.992871   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.493489   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.164706   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:40.662753   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:42.663084   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.087924   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:43.087987   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.993208   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.040237   59019 api_server.go:72] duration metric: took 1.047447953s to wait for apiserver process to appear ...
	I0319 20:36:42.040278   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:42.040323   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:42.040927   59019 api_server.go:269] stopped: https://192.168.72.29:8443/healthz: Get "https://192.168.72.29:8443/healthz": dial tcp 192.168.72.29:8443: connect: connection refused
	I0319 20:36:42.541457   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.853765   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:44.853796   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:44.853834   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.967607   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:44.967648   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.040791   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.049359   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.049400   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.541024   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.545880   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.545907   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.041423   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.046075   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.046101   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.541147   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.546547   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.546587   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:44.664041   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.163545   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.040899   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.046413   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.046453   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:47.541051   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.547309   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.547334   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.040856   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.046293   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:48.046318   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.540858   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.545311   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:36:48.551941   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:36:48.551962   59019 api_server.go:131] duration metric: took 6.511678507s to wait for apiserver health ...
	I0319 20:36:48.551970   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:48.551976   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:48.553824   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
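
	[editor's note] The block above shows the restart loop polling https://192.168.72.29:8443/healthz, getting 500 while the apiservice-discovery-controller post-start hook is still settling, and finally getting 200 before moving on to bridge CNI setup. A minimal sketch of that kind of poll is below; it is illustrative only (function name, retry interval and the InsecureSkipVerify choice are assumptions, not minikube's api_server.go), with the URL and overall timeout taken from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the deadline passes. A 500 with "[-]poststarthook/... failed"
	// simply means some post-start hooks have not finished yet.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The bootstrap apiserver serves a cluster-local cert, so the
			// sketch skips verification rather than wiring up the CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200, control plane is reachable
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.29:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
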
	I0319 20:36:45.588011   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.589644   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:50.088130   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:48.555094   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:48.566904   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:48.592246   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:48.603249   59019 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:48.603277   59019 system_pods.go:61] "coredns-7db6d8ff4d-t42ph" [bc831304-6e17-452d-8059-22bb46bad525] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:48.603284   59019 system_pods.go:61] "etcd-no-preload-414130" [e2ac0f77-fade-4ac6-a472-58df4040a57d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:48.603294   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [1128c23f-0cc6-4cd4-aeed-32f3d4570e2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:48.603300   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [b03747b6-c3ed-44cf-bcc8-dc2cea408100] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:48.603304   59019 system_pods.go:61] "kube-proxy-dttkh" [23ac1cd6-588b-4745-9c0b-740f9f0e684c] Running
	I0319 20:36:48.603313   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [99fde84c-78d6-4c57-8889-c0d9f3b55a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:48.603318   59019 system_pods.go:61] "metrics-server-569cc877fc-jvlnl" [318246fd-b809-40fa-8aff-78eb33ea10fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:48.603322   59019 system_pods.go:61] "storage-provisioner" [80470118-b092-4ba1-b830-d6f13173434d] Running
	I0319 20:36:48.603327   59019 system_pods.go:74] duration metric: took 11.054488ms to wait for pod list to return data ...
	I0319 20:36:48.603336   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:48.606647   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:48.606667   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:48.606678   59019 node_conditions.go:105] duration metric: took 3.33741ms to run NodePressure ...
	I0319 20:36:48.606693   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:48.888146   59019 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898053   59019 kubeadm.go:733] kubelet initialised
	I0319 20:36:48.898073   59019 kubeadm.go:734] duration metric: took 9.903203ms waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898082   59019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:48.911305   59019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:50.918568   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
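
	[editor's note] The pod_ready lines above repeatedly re-check system pods (coredns, etcd, the apiserver, metrics-server) until their Ready condition flips to True. A client-go sketch of that check follows, assuming a reasonably recent client-go; the kubeconfig path is a placeholder and the polling loop is illustrative, not minikube's pod_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Re-check every couple of seconds, the way the log does, until Ready.
		for {
			ok, err := podReady(cs, "kube-system", "coredns-7db6d8ff4d-t42ph")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
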
	I0319 20:36:49.664061   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.162467   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.588174   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:55.088783   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:52.920852   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:54.419386   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.419410   59019 pod_ready.go:81] duration metric: took 5.508083852s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.419420   59019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926059   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.926081   59019 pod_ready.go:81] duration metric: took 506.65554ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926090   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930519   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.930538   59019 pod_ready.go:81] duration metric: took 4.441479ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930546   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.436969   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.436991   59019 pod_ready.go:81] duration metric: took 506.439126ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.437002   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443096   59019 pod_ready.go:92] pod "kube-proxy-dttkh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.443120   59019 pod_ready.go:81] duration metric: took 6.110267ms for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443132   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465091   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:56.465114   59019 pod_ready.go:81] duration metric: took 1.021974956s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465123   59019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.163556   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.663128   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:57.589188   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.093044   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
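
	[editor's note] The blocks above repeat the same diagnostic pattern while the old-k8s-version control plane is down: pgrep for kube-apiserver, then `crictl ps -a --quiet --name=<component>` for each component (all returning empty, hence the "No container was found" warnings), then journalctl/describe-nodes log gathering. A small sketch of the container-listing step is below; the helper name and output handling are illustrative, not minikube's cri.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (running or exited)
	// whose name matches the given component, e.g. "kube-apiserver".
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(component)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", component, err)
				continue
			}
			// An empty slice here corresponds to the repeated
			// `No container was found matching "<name>"` warnings in the log.
			fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
		}
	}
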
	I0319 20:36:58.471804   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.478113   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:58.663274   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.663336   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.663834   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.588018   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.087994   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:02.973736   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.471180   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.161734   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.161995   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.091895   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.588324   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:07.971754   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.974080   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.162947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:11.661800   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.088585   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.588430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:12.475641   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.971849   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:13.662232   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:15.663770   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.588577   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.590602   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.972091   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.972471   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.473526   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.161717   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:20.166272   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:22.661804   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.088608   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:23.587236   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:23.975755   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.472094   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:24.663592   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.664475   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:25.589800   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.087868   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.088398   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:28.472705   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.972037   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:29.162647   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.662382   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:32.088569   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.587686   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:32.972676   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:35.471024   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.161665   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.169093   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.587810   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.590891   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:37.472396   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:39.472831   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.662719   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:40.663337   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:41.088396   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.089545   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.972560   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:44.471857   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.162059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.163324   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:47.662016   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.588501   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:48.087736   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:50.091413   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:46.972679   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.473552   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.665655   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.164565   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.587562   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.587929   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:51.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.471542   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.472616   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.663037   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.666932   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.588855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.087276   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:58.971607   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.474225   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.165581   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.169140   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.087715   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.092449   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.972924   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.473336   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.665054   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.666413   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.588698   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.087857   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.088761   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:08.475109   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.973956   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.162101   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.663715   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:12.588671   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.088453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:13.473479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.971583   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:13.162747   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.163005   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.662157   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.587779   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:19.588889   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:18.473455   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.473644   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.162290   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:22.167423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.589226   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.090617   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:22.473728   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.971753   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.663722   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.665702   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.589574   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.088379   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:38:26.972361   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.471757   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.473307   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:28.666420   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.162701   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.089336   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.587759   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.473519   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:35.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.665536   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.161727   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.087816   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.587570   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:38.471727   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.472995   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.162339   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.162741   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:42.661979   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.588023   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.088381   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:45.091312   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
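The block above is one pass of minikube's log-collection loop while the apiserver is down: each control-plane component is probed with crictl, then kubelet, dmesg, describe-nodes, CRI-O and container-status output is gathered; the repeated "connection to the server localhost:8443 was refused" message only means kubectl has no apiserver to reach yet. A minimal sketch of running the same checks by hand over SSH (the profile name is a placeholder; the commands themselves are the ones already shown in the log) would be:

  # probe for a kube-apiserver container on the node
  minikube ssh -p <profile> "sudo crictl ps -a --name=kube-apiserver"
  # pull recent kubelet and CRI-O logs, as the test harness does
  minikube ssh -p <profile> "sudo journalctl -u kubelet -n 400"
  minikube ssh -p <profile> "sudo journalctl -u crio -n 400"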
	I0319 20:38:42.474517   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.971377   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.662325   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.663603   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:47.587861   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:50.086555   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
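The interleaved pod_ready lines come from other test profiles running in parallel (processes 59019, 59415 and 60008), each polling its metrics-server pod in kube-system; throughout this window the pods keep reporting "Ready":"False". A rough manual equivalent, assuming the addon's usual k8s-app=metrics-server label, would be:

  # check metrics-server readiness in a given profile's cluster
  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server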
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.975287   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.475667   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.164849   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:51.661947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.087983   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.588099   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:51.971311   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:53.971907   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:55.972358   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.162991   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.163316   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.588120   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:59.090000   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:57.973293   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.471215   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:58.662516   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.663859   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.588049   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.589283   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.471981   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:04.472495   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.164344   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:05.665651   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:06.086880   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.088337   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:06.971135   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.971957   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.473482   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.162486   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.662096   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:12.662841   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.587271   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:13.087563   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:15.088454   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
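
	For reference, the loop above is the harness asking the CRI runtime for each control-plane component by container name and getting an empty ID list back every time, which is why the apiserver check keeps failing. A minimal standalone sketch of the same check (not minikube's internal cri.go, just the crictl command shown in the log wrapped in Go) looks like this:

	    // Sketch only: reproduce the per-component container check from the log
	    // by shelling out to crictl the same way the ssh_runner lines above do.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Component names taken from the log output above.
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            // Same command as in the log: all containers, IDs only, filtered by name.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("%s: crictl failed: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            // An empty list here corresponds to the `found id: ""` / `0 containers` lines above.
	            fmt.Printf("%s: %d containers\n", name, len(ids))
	        }
	    }
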
	I0319 20:39:13.972462   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.471598   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:14.664028   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.665749   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.587968   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.087138   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.471693   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.473198   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:19.162423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.164242   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:22.087921   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.089243   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
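
	Every "describe nodes" attempt above fails the same way: kubectl cannot reach localhost:8443 because nothing is listening on the apiserver port, not because of a TLS or auth problem. A quick reachability probe makes that distinction explicit; this is a hedged sketch, not part of the test suite, with the address taken from the error text:

	    // Sketch only: distinguish "nothing listening on the apiserver port"
	    // (connection refused, as in the log) from an apiserver that is up
	    // but rejecting the request.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        addr := "127.0.0.1:8443" // apiserver address from the kubectl error above
	        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	        if err != nil {
	            // "connection refused" here matches the kubectl error: the control
	            // plane never started, so describe/get calls cannot succeed.
	            fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
	            return
	        }
	        conn.Close()
	        fmt.Printf("something is listening on %s; the failure is higher up the stack\n", addr)
	    }
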
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:22.971141   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.971943   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:23.663805   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.162590   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.587708   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.588720   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.972042   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.972160   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:30.973402   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.664060   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.161446   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.092087   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.588849   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.473205   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.971076   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.162170   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.164007   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:37.662820   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.588912   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:38.086910   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:40.089385   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
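
	In parallel, the other profiles keep polling their metrics-server pods and seeing Ready=False for the entire window. The equivalent check can be reproduced from outside the harness; the sketch below shells out to kubectl with a JSONPath query (pod name copied from the log, timeout chosen to roughly match the test's 4-minute wait) and is illustrative only:

	    // Sketch only (assumed helper, not pod_ready.go): poll a pod's Ready
	    // condition with kubectl, mirroring the repeated `"Ready":"False"` lines.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    func main() {
	        const (
	            namespace = "kube-system"
	            pod       = "metrics-server-57f55c9bc5-ddl2q" // pod name taken from the log
	            jsonPath  = `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	        )
	        deadline := time.Now().Add(4 * time.Minute) // roughly the test's wait window
	        for time.Now().Before(deadline) {
	            out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace, "-o", jsonPath).Output()
	            if err == nil && strings.TrimSpace(string(out)) == "True" {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for Ready=True")
	    }
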
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:37.971651   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.973467   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.663277   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.162392   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.587250   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.589855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.472493   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.973064   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.162740   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:45.162232   59415 pod_ready.go:81] duration metric: took 4m0.006756965s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:39:45.162255   59415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:39:45.162262   59415 pod_ready.go:38] duration metric: took 4m8.418792568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
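
Editor's note: the "WaitExtra: waitPodCondition: context deadline exceeded" above is the harness polling the metrics-server pod for its Ready condition and giving up when its 4m deadline passes. Below is a minimal, hypothetical sketch of that kind of deadline-bounded Ready wait using client-go; the kubeconfig path, namespace, pod name, and poll interval are taken from or modeled on the log for illustration only, and this is not minikube's pod_ready.go.

// Hypothetical sketch: wait for a pod's Ready condition with a hard deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Give up after 4 minutes, mirroring the 4m0s deadline in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-xbh7v", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// This is the state the log records: the deadline fires first.
			fmt.Println("waitPodCondition:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}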
	I0319 20:39:45.162277   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:39:45.162309   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.162363   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.219659   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:45.219685   59415 cri.go:89] found id: ""
	I0319 20:39:45.219694   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:45.219737   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.225012   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.225072   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.268783   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.268803   59415 cri.go:89] found id: ""
	I0319 20:39:45.268810   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:45.268875   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.273758   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.273813   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:45.316870   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.316893   59415 cri.go:89] found id: ""
	I0319 20:39:45.316901   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:45.316942   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.321910   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:45.321968   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:45.360077   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:45.360098   59415 cri.go:89] found id: ""
	I0319 20:39:45.360105   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:45.360157   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.365517   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:45.365580   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:45.407686   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:45.407704   59415 cri.go:89] found id: ""
	I0319 20:39:45.407711   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:45.407752   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.412894   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:45.412954   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:45.451930   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:45.451953   59415 cri.go:89] found id: ""
	I0319 20:39:45.451964   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:45.452009   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.456634   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:45.456699   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:45.498575   59415 cri.go:89] found id: ""
	I0319 20:39:45.498601   59415 logs.go:276] 0 containers: []
	W0319 20:39:45.498611   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:45.498619   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:45.498678   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:45.548381   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.548400   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.548405   59415 cri.go:89] found id: ""
	I0319 20:39:45.548411   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:45.548469   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.553470   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.558445   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:45.558471   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.603464   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:45.603490   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.650631   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:45.650663   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:45.668744   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:45.668775   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:45.823596   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:45.823625   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.891879   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:45.891911   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.944237   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:45.944284   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:46.005819   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:46.005848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:46.069819   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.069848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.648008   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.648051   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:46.701035   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.701073   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.753159   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:46.753189   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:46.804730   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:46.804767   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
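
Editor's note: the repeated "listing CRI containers" / "crictl ps -a --quiet --name=<component>" / "crictl logs --tail 400 <id>" lines above all follow one pattern: resolve each control-plane component to its container IDs, then tail each container's log. A minimal standalone sketch of that pattern follows; the function names are illustrative and this is not minikube's logs.go.

// Sketch of the log-gathering pattern visible in this report: look up
// container IDs with `crictl ps`, then tail each one's log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (any state) whose name
// matches the given component, as reported by crictl.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, mirroring `crictl logs --tail 400 <id>` above.
			logOut, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logOut)
		}
	}
}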
	I0319 20:39:47.087453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.088165   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
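
Editor's note: just before the `kubeadm init` above, each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when that check fails, so kubeadm regenerates it from scratch. A minimal sketch of that check, assuming the same file list and the :8443 endpoint shown in the log (not minikube's kubeadm.go):

// Sketch: drop stale kubeconfigs that do not reference the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443" // endpoint from the log
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so `kubeadm init`
			// writes a fresh one, as the log's "will remove" lines indicate.
			fmt.Printf("%q does not reference %s - removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}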
	I0319 20:39:47.471171   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.472374   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:49.351186   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.368780   59415 api_server.go:72] duration metric: took 4m19.832131165s to wait for apiserver process to appear ...
	I0319 20:39:49.368806   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:39:49.368844   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:49.368913   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:49.408912   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:49.408937   59415 cri.go:89] found id: ""
	I0319 20:39:49.408947   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:49.409010   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.414194   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:49.414263   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:49.456271   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.456298   59415 cri.go:89] found id: ""
	I0319 20:39:49.456307   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:49.456374   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.461250   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:49.461316   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:49.510029   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:49.510052   59415 cri.go:89] found id: ""
	I0319 20:39:49.510061   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:49.510119   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.515604   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:49.515667   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:49.561004   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:49.561026   59415 cri.go:89] found id: ""
	I0319 20:39:49.561034   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:49.561100   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.566205   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:49.566276   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:49.610666   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:49.610685   59415 cri.go:89] found id: ""
	I0319 20:39:49.610693   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:49.610735   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.615683   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:49.615730   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:49.657632   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:49.657648   59415 cri.go:89] found id: ""
	I0319 20:39:49.657655   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:49.657711   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.662128   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:49.662172   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:49.699037   59415 cri.go:89] found id: ""
	I0319 20:39:49.699060   59415 logs.go:276] 0 containers: []
	W0319 20:39:49.699068   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:49.699074   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:49.699131   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:49.754331   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:49.754353   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:49.754359   59415 cri.go:89] found id: ""
	I0319 20:39:49.754368   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:49.754437   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.759210   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.763797   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:49.763816   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.818285   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:49.818314   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:49.946232   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:49.946266   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.994160   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:49.994186   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:50.042893   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:50.042923   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:50.099333   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:50.099362   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:50.547046   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:50.547082   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:50.593081   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:50.593111   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:50.632611   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:50.632643   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:50.689610   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:50.689641   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:50.707961   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:50.707997   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:50.752684   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:50.752713   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:50.790114   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:50.790139   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:51.089647   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.588183   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:39:51.972170   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:54.471260   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:56.472093   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.338254   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:39:53.343669   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:39:53.344796   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:39:53.344816   59415 api_server.go:131] duration metric: took 3.976004163s to wait for apiserver health ...
	I0319 20:39:53.344824   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:39:53.344854   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:53.344896   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:53.407914   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.407939   59415 cri.go:89] found id: ""
	I0319 20:39:53.407948   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:53.408000   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.414299   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:53.414360   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:53.466923   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:53.466944   59415 cri.go:89] found id: ""
	I0319 20:39:53.466953   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:53.467006   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.472181   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:53.472247   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:53.511808   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:53.511830   59415 cri.go:89] found id: ""
	I0319 20:39:53.511839   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:53.511900   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.517386   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:53.517445   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:53.560360   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:53.560383   59415 cri.go:89] found id: ""
	I0319 20:39:53.560390   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:53.560433   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.565131   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:53.565181   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:53.611243   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:53.611264   59415 cri.go:89] found id: ""
	I0319 20:39:53.611273   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:53.611326   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.616327   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:53.616391   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:53.656775   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:53.656794   59415 cri.go:89] found id: ""
	I0319 20:39:53.656801   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:53.656846   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.661915   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:53.661966   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:53.700363   59415 cri.go:89] found id: ""
	I0319 20:39:53.700389   59415 logs.go:276] 0 containers: []
	W0319 20:39:53.700396   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:53.700401   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:53.700454   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:53.750337   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:53.750357   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:53.750360   59415 cri.go:89] found id: ""
	I0319 20:39:53.750373   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:53.750426   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.755835   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.761078   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:53.761099   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:53.812898   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:53.812928   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:53.934451   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:53.934482   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.989117   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:53.989148   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:54.386028   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:54.386060   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:54.437864   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:54.437893   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:54.456559   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:54.456584   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:54.506564   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:54.506593   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:54.551120   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:54.551151   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:54.595768   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:54.595794   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:54.637715   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:54.637745   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:54.689666   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:54.689706   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:54.731821   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:54.731851   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:57.287839   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:39:57.287866   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.287870   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.287874   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.287878   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.287881   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.287884   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.287890   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.287894   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.287901   59415 system_pods.go:74] duration metric: took 3.943071923s to wait for pod list to return data ...
	I0319 20:39:57.287907   59415 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:39:57.290568   59415 default_sa.go:45] found service account: "default"
	I0319 20:39:57.290587   59415 default_sa.go:55] duration metric: took 2.674741ms for default service account to be created ...
	I0319 20:39:57.290594   59415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:39:57.296691   59415 system_pods.go:86] 8 kube-system pods found
	I0319 20:39:57.296710   59415 system_pods.go:89] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.296718   59415 system_pods.go:89] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.296722   59415 system_pods.go:89] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.296726   59415 system_pods.go:89] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.296730   59415 system_pods.go:89] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.296734   59415 system_pods.go:89] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.296741   59415 system_pods.go:89] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.296747   59415 system_pods.go:89] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.296753   59415 system_pods.go:126] duration metric: took 6.154905ms to wait for k8s-apps to be running ...
	I0319 20:39:57.296762   59415 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:39:57.296803   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:57.313729   59415 system_svc.go:56] duration metric: took 16.960151ms WaitForService to wait for kubelet
	I0319 20:39:57.313753   59415 kubeadm.go:576] duration metric: took 4m27.777105553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:39:57.313777   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:39:57.316765   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:39:57.316789   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:39:57.316803   59415 node_conditions.go:105] duration metric: took 3.021397ms to run NodePressure ...
	I0319 20:39:57.316813   59415 start.go:240] waiting for startup goroutines ...
	I0319 20:39:57.316820   59415 start.go:245] waiting for cluster config update ...
	I0319 20:39:57.316830   59415 start.go:254] writing updated cluster config ...
	I0319 20:39:57.317087   59415 ssh_runner.go:195] Run: rm -f paused
	I0319 20:39:57.365814   59415 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:39:57.368111   59415 out.go:177] * Done! kubectl is now configured to use "embed-certs-421660" cluster and "default" namespace by default
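
Editor's note: the "Checking apiserver healthz at https://192.168.50.108:8443/healthz ... returned 200: ok" exchange earlier in this run is a plain poll of the apiserver's /healthz endpoint until it answers 200 or a deadline passes. A hedged sketch of such a poll follows; the endpoint, timeout, and certificate handling are illustrative assumptions, not minikube's api_server.go.

// Sketch: poll an apiserver /healthz endpoint until it reports "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate during bring-up,
		// so this sketch skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.108:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}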
	I0319 20:39:56.088199   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.088480   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.091027   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.971917   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.972329   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:02.589430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.088313   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:03.474330   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.972928   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:07.587315   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:09.588829   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:08.471254   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:10.472963   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.087905   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:14.589786   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.973661   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:15.471559   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.087489   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.087559   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.473159   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.975538   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:21.090446   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:23.588215   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.581466   60008 pod_ready.go:81] duration metric: took 4m0.000988658s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:24.581495   60008 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:40:24.581512   60008 pod_ready.go:38] duration metric: took 4m13.547382951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:24.581535   60008 kubeadm.go:591] duration metric: took 4m20.894503953s to restartPrimaryControlPlane
	W0319 20:40:24.581583   60008 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:24.581611   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:22.472853   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.972183   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:26.973460   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:28.974127   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:31.475479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:33.973020   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:36.471909   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:38.473008   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:40.975638   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:43.473149   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:45.474566   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
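
Editor's note: the repeating "[kubelet-check] ... dial tcp 127.0.0.1:10248: connect: connection refused" lines above mean nothing is listening on the kubelet's local healthz port yet, which is why kubeadm keeps reporting the kubelet as not running or unhealthy. A small sketch of an equivalent probe, distinguishing "port closed" from "listening but unhealthy" (illustrative only, not kubeadm's code):

// Sketch: probe the kubelet healthz port that [kubelet-check] polls.
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	addr := "127.0.0.1:10248"

	// Is anything listening at all? "connection refused" here is exactly
	// the state recorded in the log: the kubelet has not come up.
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("kubelet not listening:", err)
		return
	}
	conn.Close()

	// Something is listening; ask the healthz endpoint whether it is healthy.
	resp, err := http.Get("http://" + addr + "/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}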
	I0319 20:40:47.972615   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:50.472593   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:52.973302   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:55.472067   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:56.465422   59019 pod_ready.go:81] duration metric: took 4m0.000285496s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:56.465453   59019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0319 20:40:56.465495   59019 pod_ready.go:38] duration metric: took 4m7.567400515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:56.465521   59019 kubeadm.go:591] duration metric: took 4m16.916387223s to restartPrimaryControlPlane
	W0319 20:40:56.465574   59019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:56.465604   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:56.963018   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.381377433s)
	I0319 20:40:56.963106   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:40:56.982252   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:40:56.994310   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:40:57.004950   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:40:57.004974   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:40:57.005018   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:40:57.015009   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:40:57.015070   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:40:57.026153   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:40:57.036560   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:40:57.036611   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:40:57.047469   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.060137   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:40:57.060188   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.073305   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:40:57.083299   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:40:57.083372   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:40:57.093788   60008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:40:57.352358   60008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:05.910387   60008 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:41:05.910460   60008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:05.910542   60008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:05.910660   60008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:05.910798   60008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:05.910903   60008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:05.912366   60008 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:05.912439   60008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:05.912493   60008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:05.912563   60008 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:05.912614   60008 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:05.912673   60008 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:05.912726   60008 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:05.912809   60008 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:05.912874   60008 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:05.912975   60008 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:05.913082   60008 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:05.913142   60008 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:05.913197   60008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:05.913258   60008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:05.913363   60008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:05.913439   60008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:05.913536   60008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:05.913616   60008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:05.913738   60008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:05.913841   60008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:05.915394   60008 out.go:204]   - Booting up control plane ...
	I0319 20:41:05.915486   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:05.915589   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:05.915682   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:05.915832   60008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:05.915951   60008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:05.916010   60008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:05.916154   60008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:05.916255   60008 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505433 seconds
	I0319 20:41:05.916392   60008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:05.916545   60008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:05.916628   60008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:05.916839   60008 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-385240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:05.916908   60008 kubeadm.go:309] [bootstrap-token] Using token: y9pq78.ls188thm3dr5dool
	I0319 20:41:05.918444   60008 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:05.918567   60008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:05.918654   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:05.918821   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:05.918999   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:05.919147   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:05.919260   60008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:05.919429   60008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:05.919498   60008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:05.919572   60008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:05.919582   60008 kubeadm.go:309] 
	I0319 20:41:05.919665   60008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:05.919678   60008 kubeadm.go:309] 
	I0319 20:41:05.919787   60008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:05.919799   60008 kubeadm.go:309] 
	I0319 20:41:05.919834   60008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:05.919929   60008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:05.920007   60008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:05.920017   60008 kubeadm.go:309] 
	I0319 20:41:05.920102   60008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:05.920112   60008 kubeadm.go:309] 
	I0319 20:41:05.920182   60008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:05.920191   60008 kubeadm.go:309] 
	I0319 20:41:05.920284   60008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:05.920411   60008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:05.920506   60008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:05.920520   60008 kubeadm.go:309] 
	I0319 20:41:05.920648   60008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:05.920762   60008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:05.920771   60008 kubeadm.go:309] 
	I0319 20:41:05.920901   60008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921063   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:05.921099   60008 kubeadm.go:309] 	--control-plane 
	I0319 20:41:05.921105   60008 kubeadm.go:309] 
	I0319 20:41:05.921207   60008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:05.921216   60008 kubeadm.go:309] 
	I0319 20:41:05.921285   60008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921386   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:05.921397   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:41:05.921403   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:05.922921   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:05.924221   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:05.941888   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:06.040294   60008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:06.040378   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.040413   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-385240 minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=default-k8s-diff-port-385240 minikube.k8s.io/primary=true
	I0319 20:41:06.104038   60008 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:06.266168   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.766345   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.266622   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.766418   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.266864   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.766777   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.266420   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.766319   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:10.266990   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:10.766714   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.266839   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.767222   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.266933   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.766390   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.266562   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.766618   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.267159   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.767010   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.266307   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.767002   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.266488   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.766567   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.266789   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.766935   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.266312   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.767202   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.904766   60008 kubeadm.go:1107] duration metric: took 12.864451937s to wait for elevateKubeSystemPrivileges
	W0319 20:41:18.904802   60008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:18.904810   60008 kubeadm.go:393] duration metric: took 5m15.275720912s to StartCluster
	I0319 20:41:18.904826   60008 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.904910   60008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:18.906545   60008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.906817   60008 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:18.908538   60008 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:18.906944   60008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:18.907019   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:41:18.910084   60008 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:18.910100   60008 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910125   60008 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:18.910135   60008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385240"
	W0319 20:41:18.910141   60008 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:18.910255   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910127   60008 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.910313   60008 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:18.910334   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910603   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910635   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910647   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910667   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910692   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910671   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.927094   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0319 20:41:18.927240   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0319 20:41:18.927517   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.927620   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928036   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928059   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928074   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0319 20:41:18.928331   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928360   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928492   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928538   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928737   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928993   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.929009   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.929046   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.929066   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929108   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.929338   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.929862   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929893   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.932815   60008 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.932838   60008 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:18.932865   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.933211   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.933241   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.945888   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0319 20:41:18.946351   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.946842   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.946869   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.947426   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.947600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.947808   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0319 20:41:18.948220   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.948367   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0319 20:41:18.948739   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.948753   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.949222   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.949277   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.951252   60008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:18.949736   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.950173   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.951720   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.952838   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.952813   60008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:18.952917   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:18.952934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.952815   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.953264   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.953460   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.955228   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.957199   60008 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:18.958698   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:18.958715   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:18.958733   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.956502   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.957073   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.958806   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.958845   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.959306   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.959485   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.959783   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.961410   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961775   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.961802   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961893   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.962065   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.962213   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.962369   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.975560   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0319 20:41:18.976026   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.976503   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.976524   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.976893   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.977128   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.978582   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.978862   60008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:18.978881   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:18.978898   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.981356   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981730   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.981762   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.982056   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.982192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.982337   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:19.126985   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:19.188792   60008 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198961   60008 node_ready.go:49] node "default-k8s-diff-port-385240" has status "Ready":"True"
	I0319 20:41:19.198981   60008 node_ready.go:38] duration metric: took 10.160382ms for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198992   60008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:19.209346   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:19.335212   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:19.414291   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:19.506570   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:19.506590   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:19.651892   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:19.651916   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:19.808237   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:19.808282   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:19.924353   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:20.583635   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169310347s)
	I0319 20:41:20.583700   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.583717   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.583981   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.583991   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.584015   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.584027   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.584253   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.584282   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585518   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250274289s)
	I0319 20:41:20.585568   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585584   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.585855   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.585879   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.585888   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585902   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585916   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.586162   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.586168   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.586177   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.609166   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.609183   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.609453   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.609492   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.609502   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.750409   60008 pod_ready.go:92] pod "coredns-76f75df574-4rq6h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:20.750433   60008 pod_ready.go:81] duration metric: took 1.541065393s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.750442   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.869692   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.869719   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.869995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.870000   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870025   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870045   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.870057   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.870336   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870352   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870366   60008 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:20.872093   60008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:41:20.873465   60008 addons.go:505] duration metric: took 1.966520277s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:41:21.260509   60008 pod_ready.go:92] pod "coredns-76f75df574-swxdt" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.260533   60008 pod_ready.go:81] duration metric: took 510.083899ms for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.260543   60008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268298   60008 pod_ready.go:92] pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.268324   60008 pod_ready.go:81] duration metric: took 7.772878ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268335   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274436   60008 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.274461   60008 pod_ready.go:81] duration metric: took 6.117464ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274472   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281324   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.281347   60008 pod_ready.go:81] duration metric: took 6.866088ms for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281367   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.593980   60008 pod_ready.go:92] pod "kube-proxy-j7ghm" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.594001   60008 pod_ready.go:81] duration metric: took 312.62702ms for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.594009   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993321   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.993346   60008 pod_ready.go:81] duration metric: took 399.330556ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993362   60008 pod_ready.go:38] duration metric: took 2.794359581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:21.993375   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:21.993423   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:22.010583   60008 api_server.go:72] duration metric: took 3.10372573s to wait for apiserver process to appear ...
	I0319 20:41:22.010609   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:22.010629   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:41:22.015218   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:41:22.016276   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:41:22.016291   60008 api_server.go:131] duration metric: took 5.6763ms to wait for apiserver health ...
	I0319 20:41:22.016298   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:22.197418   60008 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:22.197454   60008 system_pods.go:61] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.197460   60008 system_pods.go:61] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.197465   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.197470   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.197476   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.197481   60008 system_pods.go:61] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.197485   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.197494   60008 system_pods.go:61] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.197500   60008 system_pods.go:61] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.197514   60008 system_pods.go:74] duration metric: took 181.210964ms to wait for pod list to return data ...
	I0319 20:41:22.197526   60008 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:22.392702   60008 default_sa.go:45] found service account: "default"
	I0319 20:41:22.392738   60008 default_sa.go:55] duration metric: took 195.195704ms for default service account to be created ...
	I0319 20:41:22.392751   60008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:22.595946   60008 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:22.595975   60008 system_pods.go:89] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.595980   60008 system_pods.go:89] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.595985   60008 system_pods.go:89] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.595991   60008 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.595996   60008 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.596006   60008 system_pods.go:89] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.596010   60008 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.596016   60008 system_pods.go:89] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.596022   60008 system_pods.go:89] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.596034   60008 system_pods.go:126] duration metric: took 203.277741ms to wait for k8s-apps to be running ...
	I0319 20:41:22.596043   60008 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:22.596087   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:22.615372   60008 system_svc.go:56] duration metric: took 19.319488ms WaitForService to wait for kubelet
	I0319 20:41:22.615396   60008 kubeadm.go:576] duration metric: took 3.708546167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:22.615413   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:22.793277   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:22.793303   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:22.793313   60008 node_conditions.go:105] duration metric: took 177.89499ms to run NodePressure ...
	I0319 20:41:22.793325   60008 start.go:240] waiting for startup goroutines ...
	I0319 20:41:22.793331   60008 start.go:245] waiting for cluster config update ...
	I0319 20:41:22.793342   60008 start.go:254] writing updated cluster config ...
	I0319 20:41:22.793598   60008 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:22.845339   60008 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:41:22.847429   60008 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-385240" cluster and "default" namespace by default
	I0319 20:41:29.064044   59019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.598411816s)
	I0319 20:41:29.064115   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:29.082924   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:41:29.095050   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:29.106905   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:29.106918   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:29.106962   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:29.118153   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:29.118209   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:29.128632   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:29.140341   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:29.140401   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:29.151723   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.162305   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:29.162365   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.173654   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:29.185155   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:29.185211   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:29.196015   59019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:29.260934   59019 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0319 20:41:29.261054   59019 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:29.412424   59019 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:29.412592   59019 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:29.412759   59019 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:29.636019   59019 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:29.638046   59019 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:29.638158   59019 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:29.638216   59019 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:29.638279   59019 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:29.638331   59019 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:29.645456   59019 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:29.645553   59019 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:29.645610   59019 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:29.645663   59019 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:29.645725   59019 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:29.645788   59019 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:29.645822   59019 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:29.645869   59019 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:29.895850   59019 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:30.248635   59019 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:30.380474   59019 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:30.457908   59019 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:30.585194   59019 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:30.585852   59019 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:30.588394   59019 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:30.590147   59019 out.go:204]   - Booting up control plane ...
	I0319 20:41:30.590241   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:30.590353   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:30.590606   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:30.611645   59019 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:30.614010   59019 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:30.614266   59019 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:30.757838   59019 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 20:41:30.757973   59019 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0319 20:41:31.758717   59019 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001332477s
	I0319 20:41:31.758819   59019 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 20:41:37.261282   59019 kubeadm.go:309] [api-check] The API server is healthy after 5.50238s
	I0319 20:41:37.275017   59019 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:37.299605   59019 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:37.335190   59019 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:37.335449   59019 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-414130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:37.350882   59019 kubeadm.go:309] [bootstrap-token] Using token: 0euy3c.pb7fih13u47u7k5a
	I0319 20:41:37.352692   59019 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:37.352796   59019 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:37.357551   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:37.365951   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:37.369544   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:37.376066   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:37.379284   59019 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:37.669667   59019 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:38.120423   59019 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:38.668937   59019 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:38.670130   59019 kubeadm.go:309] 
	I0319 20:41:38.670236   59019 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:38.670251   59019 kubeadm.go:309] 
	I0319 20:41:38.670339   59019 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:38.670348   59019 kubeadm.go:309] 
	I0319 20:41:38.670369   59019 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:38.670451   59019 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:38.670520   59019 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:38.670530   59019 kubeadm.go:309] 
	I0319 20:41:38.670641   59019 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:38.670653   59019 kubeadm.go:309] 
	I0319 20:41:38.670720   59019 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:38.670731   59019 kubeadm.go:309] 
	I0319 20:41:38.670802   59019 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:38.670916   59019 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:38.671036   59019 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:38.671053   59019 kubeadm.go:309] 
	I0319 20:41:38.671185   59019 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:38.671332   59019 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:38.671351   59019 kubeadm.go:309] 
	I0319 20:41:38.671438   59019 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671588   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:38.671609   59019 kubeadm.go:309] 	--control-plane 
	I0319 20:41:38.671613   59019 kubeadm.go:309] 
	I0319 20:41:38.671684   59019 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:38.671693   59019 kubeadm.go:309] 
	I0319 20:41:38.671758   59019 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671877   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:38.672172   59019 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
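The bootstrap token in the join commands above is short-lived (24h TTL by default), so the printed commands only work for a limited window. If a node needs to join later, the token can be listed or a fresh join command generated on the control-plane node; a minimal sketch, assuming shell access to the node and kubeadm on PATH (in this VM the binary lives under /var/lib/minikube/binaries/v1.30.0-beta.0/):

	# list current bootstrap tokens and their expiry
	sudo kubeadm token list
	# print a fresh worker join command with a new token
	sudo kubeadm token create --print-join-command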
	I0319 20:41:38.672197   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:41:38.672212   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:38.674158   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:38.675618   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:38.690458   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
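The 457-byte conflist written here is minikube's generated bridge CNI configuration; its contents are not echoed in the log. An illustrative way to inspect what was actually written, run from the host (not part of the test itself):

	minikube -p no-preload-414130 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"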
	I0319 20:41:38.712520   59019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:38.712597   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:38.712616   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-414130 minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=no-preload-414130 minikube.k8s.io/primary=true
	I0319 20:41:38.902263   59019 ops.go:34] apiserver oom_adj: -16
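The two kubectl invocations above grant cluster-admin to the kube-system:default service account (the minikube-rbac binding) and stamp the control-plane node with minikube's metadata labels. Both results can be verified against the same kubeconfig; an illustrative check using the paths shown in the log:

	sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node no-preload-414130 --show-labels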
	I0319 20:41:38.902364   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.403054   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.903127   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.402786   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.903358   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.403414   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.902829   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.402506   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.903338   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.402784   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.902477   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.403152   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.903190   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.402544   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.903397   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:46.402785   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
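Before the retry below, the failure text above already names the useful diagnostics; collected here into one sequence for a cri-o node (the same commands kubeadm prints, plus the enable hint from the [WARNING Service-Kubelet] line):

	sudo systemctl enable kubelet.service   # addresses the warning about the kubelet service not being enabled
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for a failing container:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID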
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
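The grep/rm sequence above is minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is re-run. The same check can be expressed as a single loop (an equivalent sketch of what the log shows, not the code minikube runs):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done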
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:46.902656   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.402845   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.903436   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.402511   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.903073   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.402559   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.902914   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.402708   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.903441   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.403416   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.585670   59019 kubeadm.go:1107] duration metric: took 12.873132825s to wait for elevateKubeSystemPrivileges
	W0319 20:41:51.585714   59019 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:51.585724   59019 kubeadm.go:393] duration metric: took 5m12.093644869s to StartCluster
	I0319 20:41:51.585744   59019 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.585835   59019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:51.588306   59019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.588634   59019 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:51.590331   59019 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:51.588755   59019 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:51.588891   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:41:51.590430   59019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-414130"
	I0319 20:41:51.591988   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:51.592020   59019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-414130"
	W0319 20:41:51.592038   59019 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:51.592069   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.590437   59019 addons.go:69] Setting default-storageclass=true in profile "no-preload-414130"
	I0319 20:41:51.590441   59019 addons.go:69] Setting metrics-server=true in profile "no-preload-414130"
	I0319 20:41:51.592098   59019 addons.go:234] Setting addon metrics-server=true in "no-preload-414130"
	W0319 20:41:51.592114   59019 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:51.592129   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.592164   59019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-414130"
	I0319 20:41:51.592450   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592479   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592505   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592532   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.608909   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0319 20:41:51.609383   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.609942   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.609962   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.610565   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.610774   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.612725   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0319 20:41:51.612794   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0319 20:41:51.613141   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.613637   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.613660   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.614121   59019 addons.go:234] Setting addon default-storageclass=true in "no-preload-414130"
	W0319 20:41:51.614139   59019 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:51.614167   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.614214   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.614482   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614512   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614774   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614810   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614876   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.615336   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.615369   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.615703   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.616237   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.616281   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.630175   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0319 20:41:51.630802   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.631279   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.631296   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.631645   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.632322   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.632356   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.634429   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0319 20:41:51.634865   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.635311   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.635324   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.635922   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.636075   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.637997   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.640025   59019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:51.641428   59019 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:51.641445   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:51.641462   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.644316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644838   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.644853   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644875   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0319 20:41:51.645162   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.645300   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.645365   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.645499   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.645613   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.645964   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.645976   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.646447   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.646663   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.648174   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.649872   59019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:51.651152   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:51.651177   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:51.651197   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.654111   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654523   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.654545   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654792   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.654987   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.655156   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.655281   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.656648   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0319 20:41:51.656960   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.657457   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.657471   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.657751   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.657948   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.659265   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.659503   59019 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:51.659517   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:51.659533   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.662039   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662427   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.662447   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662583   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.662757   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.662879   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.662991   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.845584   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:51.876597   59019 node_ready.go:35] waiting up to 6m0s for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886290   59019 node_ready.go:49] node "no-preload-414130" has status "Ready":"True"
	I0319 20:41:51.886308   59019 node_ready.go:38] duration metric: took 9.684309ms for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886315   59019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:51.893456   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:51.976850   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:52.031123   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:52.031144   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:52.133184   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:52.195945   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:52.195968   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:52.270721   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.270745   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:52.407604   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.578113   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578140   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578511   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578524   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.578532   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.578557   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578566   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578809   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578828   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.610849   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.610873   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.611246   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.611251   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.611269   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.342742   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.209525982s)
	I0319 20:41:53.342797   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.342808   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343131   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343159   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343163   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.343174   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.343194   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343486   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343503   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343525   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.450430   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.450458   59019 pod_ready.go:81] duration metric: took 1.556981953s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.450478   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459425   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.459454   59019 pod_ready.go:81] duration metric: took 8.967211ms for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459467   59019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495144   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.495164   59019 pod_ready.go:81] duration metric: took 35.690498ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495173   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520382   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.520412   59019 pod_ready.go:81] duration metric: took 25.23062ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520426   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530859   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.530889   59019 pod_ready.go:81] duration metric: took 10.451233ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530903   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.545946   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.13830463s)
	I0319 20:41:53.545994   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546009   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546304   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546323   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546333   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546350   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546678   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546695   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546706   59019 addons.go:470] Verifying addon metrics-server=true in "no-preload-414130"
	I0319 20:41:53.546764   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.548523   59019 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0319 20:41:53.549990   59019 addons.go:505] duration metric: took 1.961237309s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
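With the addons applied, their state can be confirmed from the host; an illustrative check (the metrics-server Deployment is expected in kube-system, matching the metrics-server-569cc877fc-27n2b pod listed further down):

	minikube -p no-preload-414130 addons list
	kubectl --context no-preload-414130 -n kube-system get deploy metrics-server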
	I0319 20:41:53.881082   59019 pod_ready.go:92] pod "kube-proxy-m7m4h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.881107   59019 pod_ready.go:81] duration metric: took 350.197776ms for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.881116   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283891   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:54.283924   59019 pod_ready.go:81] duration metric: took 402.800741ms for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283936   59019 pod_ready.go:38] duration metric: took 2.397611991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:54.283953   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:54.284016   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:54.304606   59019 api_server.go:72] duration metric: took 2.715931012s to wait for apiserver process to appear ...
	I0319 20:41:54.304629   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:54.304651   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:41:54.309292   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:41:54.310195   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:41:54.310215   59019 api_server.go:131] duration metric: took 5.579162ms to wait for apiserver health ...
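The healthz probe above hits the apiserver directly at https://192.168.72.29:8443/healthz and expects the literal body "ok". The same check can be reproduced by hand; a minimal sketch (the kubectl form authenticates via the kubeconfig, the curl form assumes anonymous access to /healthz is permitted):

	kubectl --context no-preload-414130 get --raw /healthz
	curl -sk https://192.168.72.29:8443/healthz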
	I0319 20:41:54.310225   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:54.488441   59019 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:54.488475   59019 system_pods.go:61] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.488482   59019 system_pods.go:61] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.488487   59019 system_pods.go:61] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.488491   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.488496   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.488500   59019 system_pods.go:61] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.488505   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.488514   59019 system_pods.go:61] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.488520   59019 system_pods.go:61] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.488530   59019 system_pods.go:74] duration metric: took 178.298577ms to wait for pod list to return data ...
	I0319 20:41:54.488543   59019 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:54.679537   59019 default_sa.go:45] found service account: "default"
	I0319 20:41:54.679560   59019 default_sa.go:55] duration metric: took 191.010696ms for default service account to be created ...
	I0319 20:41:54.679569   59019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:54.884163   59019 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:54.884197   59019 system_pods.go:89] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.884205   59019 system_pods.go:89] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.884211   59019 system_pods.go:89] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.884217   59019 system_pods.go:89] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.884223   59019 system_pods.go:89] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.884230   59019 system_pods.go:89] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.884236   59019 system_pods.go:89] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.884246   59019 system_pods.go:89] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.884268   59019 system_pods.go:89] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.884281   59019 system_pods.go:126] duration metric: took 204.70598ms to wait for k8s-apps to be running ...
	I0319 20:41:54.884294   59019 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:54.884348   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:54.901838   59019 system_svc.go:56] duration metric: took 17.536645ms WaitForService to wait for kubelet
	I0319 20:41:54.901869   59019 kubeadm.go:576] duration metric: took 3.313198534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:54.901887   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:55.080463   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:55.080485   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:55.080495   59019 node_conditions.go:105] duration metric: took 178.603035ms to run NodePressure ...
	I0319 20:41:55.080507   59019 start.go:240] waiting for startup goroutines ...
	I0319 20:41:55.080513   59019 start.go:245] waiting for cluster config update ...
	I0319 20:41:55.080523   59019 start.go:254] writing updated cluster config ...
	I0319 20:41:55.080753   59019 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:55.130477   59019 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0319 20:41:55.133906   59019 out.go:177] * Done! kubectl is now configured to use "no-preload-414130" cluster and "default" namespace by default
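Once minikube reports the profile as ready, basic sanity checks against the new context look roughly like this (illustrative; the context name matches the profile, as the message above states):

	kubectl --context no-preload-414130 get nodes -o wide
	kubectl --context no-preload-414130 get pods -A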
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 
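	The suggestion and related issue above point at the kubelet cgroup driver. A minimal sketch of how that advice could be followed by hand is shown below; <profile> is a placeholder for the failing minikube profile name (not taken from this log), and the commands simply mirror the ones already printed by kubeadm and minikube above.
	
	    # inspect kubelet state on the node (commands from the kubeadm hint above)
	    minikube ssh -p <profile> -- sudo systemctl status kubelet
	    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	
	    # list any control-plane containers CRI-O did manage to start
	    minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	    # retry the start with the cgroup driver suggested by minikube
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd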
	
	
	==> CRI-O <==
	Mar 19 20:50:24 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:24.977324611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881424977303346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37d2205d-a781-4ebd-a6c4-6a6ce1f73883 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:24 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:24.978032788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81056268-46a6-4fa7-b41d-f9eba2152a30 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:24 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:24.978608332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81056268-46a6-4fa7-b41d-f9eba2152a30 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:24 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:24.978921437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81056268-46a6-4fa7-b41d-f9eba2152a30 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.031296624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da405002-b6b9-474a-85cf-8376f0d5fae8 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.031498710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da405002-b6b9-474a-85cf-8376f0d5fae8 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.033744744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a44f5ec5-4f59-4c97-8fb0-e76608569a5d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.034798525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881425034762800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a44f5ec5-4f59-4c97-8fb0-e76608569a5d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.035643075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c51a098-a9cf-476e-9252-568ac1df72ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.035719914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c51a098-a9cf-476e-9252-568ac1df72ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.036007237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c51a098-a9cf-476e-9252-568ac1df72ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.087127618Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48e1ceca-3259-4109-87c2-fb3b3b9a1ca6 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.087198455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48e1ceca-3259-4109-87c2-fb3b3b9a1ca6 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.089718271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89f03d4c-52e6-40dc-8229-6ffa17eefb42 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.090113623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881425090090323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89f03d4c-52e6-40dc-8229-6ffa17eefb42 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.091018970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbc4d318-d2be-4f47-957c-54836d628fd3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.091070982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbc4d318-d2be-4f47-957c-54836d628fd3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.091266581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbc4d318-d2be-4f47-957c-54836d628fd3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.131850429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c13fbe8-62c4-41eb-a886-5e74c27817e7 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.131921362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c13fbe8-62c4-41eb-a886-5e74c27817e7 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.133850430Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cd5fdb8-6919-48b7-92a0-2bc7cc86053a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.135251862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881425135158201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cd5fdb8-6919-48b7-92a0-2bc7cc86053a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.136306541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f4d07d6-c437-4f07-8ba5-8d88f54df9f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.136445774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f4d07d6-c437-4f07-8ba5-8d88f54df9f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:25 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:50:25.136796949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f4d07d6-c437-4f07-8ba5-8d88f54df9f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e5edce9fd30e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6a1d349cb1723       storage-provisioner
	8e9bbe7a0b88a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   94c90ce3b554f       coredns-76f75df574-swxdt
	373088355ffbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   468ab7d556f73       coredns-76f75df574-4rq6h
	65a6211bab4fa       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   a3653b80a5bd4       kube-proxy-j7ghm
	0ec51f453399c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   bf5b86b99d65a       kube-scheduler-default-k8s-diff-port-385240
	213fcde428339       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   30a50029292e9       kube-apiserver-default-k8s-diff-port-385240
	21a093811a77e       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   dcaf72d4992d5       kube-controller-manager-default-k8s-diff-port-385240
	ba041437b7854       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   b7c7cba2a4b8e       etcd-default-k8s-diff-port-385240
	28d1f1e818e44       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   6ab2a9e728b41       kube-apiserver-default-k8s-diff-port-385240
	
	
	==> coredns [373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-385240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-385240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=default-k8s-diff-port-385240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:41:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-385240
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:50:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:46:32 +0000   Tue, 19 Mar 2024 20:41:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:46:32 +0000   Tue, 19 Mar 2024 20:41:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:46:32 +0000   Tue, 19 Mar 2024 20:41:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:46:32 +0000   Tue, 19 Mar 2024 20:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    default-k8s-diff-port-385240
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75be40579a0849b998edf347aba225d2
	  System UUID:                75be4057-9a08-49b9-98ed-f347aba225d2
	  Boot ID:                    9233f891-93d8-4a92-9088-940e41dc6547
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4rq6h                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-76f75df574-swxdt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-385240                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-385240             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-385240    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-j7ghm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-385240             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-nv288                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node default-k8s-diff-port-385240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node default-k8s-diff-port-385240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node default-k8s-diff-port-385240 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node default-k8s-diff-port-385240 event: Registered Node default-k8s-diff-port-385240 in Controller
	
	
	==> dmesg <==
	[  +0.053229] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.910546] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.516450] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681047] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.494753] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.057631] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075790] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.220764] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.157207] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.352029] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[Mar19 20:36] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.067842] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.327593] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +4.631759] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.500003] kauditd_printk_skb: 69 callbacks suppressed
	[Mar19 20:40] systemd-fstab-generator[3398]: Ignoring "noauto" option for root device
	[  +0.075881] kauditd_printk_skb: 7 callbacks suppressed
	[Mar19 20:41] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.074775] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.282761] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[  +0.128016] kauditd_printk_skb: 12 callbacks suppressed
	[Mar19 20:42] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82] <==
	{"level":"info","ts":"2024-03-19T20:41:00.257241Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:41:00.25725Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-19T20:41:00.29214Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-19T20:41:00.292346Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"226361457cf4c252","initial-advertise-peer-urls":["https://192.168.39.77:2380"],"listen-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-19T20:41:00.292374Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:41:00.292579Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-03-19T20:41:00.292595Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-03-19T20:41:00.821502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:00.821657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:00.821807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgPreVoteResp from 226361457cf4c252 at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:00.823487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:00.823612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgVoteResp from 226361457cf4c252 at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:00.823645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became leader at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:00.823753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226361457cf4c252 elected leader 226361457cf4c252 at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:00.828846Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"226361457cf4c252","local-member-attributes":"{Name:default-k8s-diff-port-385240 ClientURLs:[https://192.168.39.77:2379]}","request-path":"/0/members/226361457cf4c252/attributes","cluster-id":"b43d13dd46d94ad8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:41:00.829462Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:41:00.829518Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:00.833603Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:41:00.833649Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:41:00.835323Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.77:2379"}
	{"level":"info","ts":"2024-03-19T20:41:00.829547Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:41:00.83689Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:00.846894Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:00.851708Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:00.848063Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:50:25 up 14 min,  0 users,  load average: 0.12, 0.26, 0.23
	Linux default-k8s-diff-port-385240 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee] <==
	I0319 20:44:21.659712       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:46:02.665065       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:02.665210       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0319 20:46:03.665583       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:03.665648       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:46:03.665663       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:46:03.665745       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:03.665905       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:46:03.667162       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:47:03.666645       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:47:03.666831       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:47:03.666860       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:47:03.668086       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:47:03.668230       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:47:03.668280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:49:03.667746       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:49:03.667852       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:49:03.667868       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:49:03.669087       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:49:03.669252       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:49:03.669309       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946] <==
	W0319 20:40:52.995687       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.070035       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.169092       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.295652       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.348769       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.359805       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.394784       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.429823       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.435503       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.480264       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.484996       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.542805       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.546747       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.571709       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.646687       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.663537       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.668024       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.964273       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.007596       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.155302       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.441699       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.569152       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.786384       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.886057       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:55.052270       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf] <==
	I0319 20:44:48.366161       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:45:17.855769       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:45:18.377163       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:45:47.861722       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:45:48.386995       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:46:17.867919       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:46:18.395035       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:46:47.874595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:46:48.403831       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:47:17.881252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:47:18.413613       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:47:19.996490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="303.507µs"
	I0319 20:47:33.987311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="145.388µs"
	E0319 20:47:47.887272       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:47:48.422216       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:48:17.893161       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:48:18.431612       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:48:47.900887       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:48:48.441938       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:49:17.907241       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:49:18.450709       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:49:47.913238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:49:48.458658       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:50:17.920767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:50:18.467556       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827] <==
	I0319 20:41:19.692516       1 server_others.go:72] "Using iptables proxy"
	I0319 20:41:19.726024       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	I0319 20:41:19.916370       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:41:19.916439       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:41:19.916498       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:41:19.923760       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:41:19.924272       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:41:19.924951       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:41:19.926009       1 config.go:188] "Starting service config controller"
	I0319 20:41:19.926030       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:41:19.926052       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:41:19.926056       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:41:19.926769       1 config.go:315] "Starting node config controller"
	I0319 20:41:19.926779       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:41:20.027569       1 shared_informer.go:318] Caches are synced for node config
	I0319 20:41:20.027643       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:41:20.027689       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c] <==
	E0319 20:41:02.745729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 20:41:02.745736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 20:41:02.745747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 20:41:02.745754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:02.745965       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:41:02.746126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:41:02.755963       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:41:02.756056       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:41:03.579139       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:41:03.579245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:41:03.591023       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:41:03.591163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:41:03.629227       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:41:03.629293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:41:03.673318       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:03.673487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:03.791689       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 20:41:03.791746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:03.948558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 20:41:03.948659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 20:41:03.982355       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:41:03.982513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 20:41:04.182325       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:41:04.182386       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 20:41:06.092564       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:48:06 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:48:06.033774    3723 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:48:06 default-k8s-diff-port-385240 kubelet[3723]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:48:06 default-k8s-diff-port-385240 kubelet[3723]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:48:06 default-k8s-diff-port-385240 kubelet[3723]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:48:06 default-k8s-diff-port-385240 kubelet[3723]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:48:12 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:48:12.969453    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:48:25 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:48:25.970238    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:48:40 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:48:40.969709    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:48:53 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:48:53.969690    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:49:06 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:49:06.026161    3723 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:49:06 default-k8s-diff-port-385240 kubelet[3723]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:49:06 default-k8s-diff-port-385240 kubelet[3723]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:49:06 default-k8s-diff-port-385240 kubelet[3723]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:49:06 default-k8s-diff-port-385240 kubelet[3723]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:49:07 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:49:07.969286    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:49:22 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:49:22.970921    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:49:36 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:49:36.969177    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:49:47 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:49:47.970634    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:50:01 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:50:01.971138    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:50:06 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:50:06.026226    3723 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:50:06 default-k8s-diff-port-385240 kubelet[3723]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:50:06 default-k8s-diff-port-385240 kubelet[3723]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:50:06 default-k8s-diff-port-385240 kubelet[3723]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:50:06 default-k8s-diff-port-385240 kubelet[3723]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:50:15 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:50:15.971806    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	
	
	==> storage-provisioner [e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b] <==
	I0319 20:41:21.105613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:41:21.128050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:41:21.128241       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:41:21.154159       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:41:21.154303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-385240_acf47112-9fa7-4021-9c0f-0021669b91bc!
	I0319 20:41:21.158148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"071ad68d-5ddb-4392-ba9f-ab05da3e1e3c", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-385240_acf47112-9fa7-4021-9c0f-0021669b91bc became leader
	I0319 20:41:21.254958       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-385240_acf47112-9fa7-4021-9c0f-0021669b91bc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nv288
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 describe pod metrics-server-57f55c9bc5-nv288
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-385240 describe pod metrics-server-57f55c9bc5-nv288: exit status 1 (64.934152ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nv288" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-385240 describe pod metrics-server-57f55c9bc5-nv288: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0319 20:43:07.884682   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-414130 -n no-preload-414130
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-19 20:50:55.69549397 +0000 UTC m=+6379.816113744
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-414130 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-414130 logs -n 25: (2.126910138s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:33:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:33:00.489344   60008 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:33:00.489594   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489603   60008 out.go:304] Setting ErrFile to fd 2...
	I0319 20:33:00.489607   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489787   60008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:33:00.490297   60008 out.go:298] Setting JSON to false
	I0319 20:33:00.491188   60008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8078,"bootTime":1710872302,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:33:00.491245   60008 start.go:139] virtualization: kvm guest
	I0319 20:33:00.493588   60008 out.go:177] * [default-k8s-diff-port-385240] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:33:00.495329   60008 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:33:00.496506   60008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:33:00.495369   60008 notify.go:220] Checking for updates...
	I0319 20:33:00.499210   60008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:33:00.500494   60008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:33:00.501820   60008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:33:00.503200   60008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:33:00.504837   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:33:00.505191   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.505266   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.519674   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0319 20:33:00.520123   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.520634   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.520656   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.520945   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.521132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.521364   60008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:33:00.521629   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.521660   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.535764   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0319 20:33:00.536105   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.536564   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.536583   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.536890   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.537079   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.572160   60008 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:33:00.573517   60008 start.go:297] selected driver: kvm2
	I0319 20:33:00.573530   60008 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.573663   60008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:33:00.574335   60008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.574423   60008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:33:00.588908   60008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:33:00.589283   60008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:33:00.589354   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:33:00.589375   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:33:00.589419   60008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.589532   60008 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.591715   60008 out.go:177] * Starting "default-k8s-diff-port-385240" primary control-plane node in "default-k8s-diff-port-385240" cluster
	I0319 20:32:58.292485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:01.364553   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:00.593043   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:33:00.593084   60008 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:33:00.593094   60008 cache.go:56] Caching tarball of preloaded images
	I0319 20:33:00.593156   60008 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:33:00.593166   60008 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:33:00.593281   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:33:00.593454   60008 start.go:360] acquireMachinesLock for default-k8s-diff-port-385240: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:33:07.444550   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:10.516480   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:16.596485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:19.668501   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:25.748504   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:28.820525   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:34.900508   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:37.972545   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:44.052478   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:47.124492   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:53.204484   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:56.276536   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:02.356552   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:05.428529   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:11.508540   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:14.580485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:20.660521   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:23.732555   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:29.812516   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:32.884574   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:38.964472   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:42.036583   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:48.116547   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:51.188507   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:54.193037   59415 start.go:364] duration metric: took 3m51.108134555s to acquireMachinesLock for "embed-certs-421660"
	I0319 20:34:54.193108   59415 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:34:54.193120   59415 fix.go:54] fixHost starting: 
	I0319 20:34:54.193458   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:34:54.193487   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:34:54.208614   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0319 20:34:54.209078   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:34:54.209506   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:34:54.209527   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:34:54.209828   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:34:54.209992   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:34:54.210117   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:34:54.211626   59415 fix.go:112] recreateIfNeeded on embed-certs-421660: state=Stopped err=<nil>
	I0319 20:34:54.211661   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	W0319 20:34:54.211820   59415 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:34:54.213989   59415 out.go:177] * Restarting existing kvm2 VM for "embed-certs-421660" ...
	I0319 20:34:54.190431   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:34:54.190483   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.190783   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:34:54.190809   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.191021   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:34:54.192901   59019 machine.go:97] duration metric: took 4m37.398288189s to provisionDockerMachine
	I0319 20:34:54.192939   59019 fix.go:56] duration metric: took 4m37.41948201s for fixHost
	I0319 20:34:54.192947   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 4m37.419503815s
	W0319 20:34:54.192970   59019 start.go:713] error starting host: provision: host is not running
	W0319 20:34:54.193060   59019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0319 20:34:54.193071   59019 start.go:728] Will try again in 5 seconds ...
	I0319 20:34:54.215391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Start
	I0319 20:34:54.215559   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring networks are active...
	I0319 20:34:54.216249   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network default is active
	I0319 20:34:54.216543   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network mk-embed-certs-421660 is active
	I0319 20:34:54.216902   59415 main.go:141] libmachine: (embed-certs-421660) Getting domain xml...
	I0319 20:34:54.217595   59415 main.go:141] libmachine: (embed-certs-421660) Creating domain...
	I0319 20:34:55.407058   59415 main.go:141] libmachine: (embed-certs-421660) Waiting to get IP...
	I0319 20:34:55.407855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.408280   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.408343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.408247   60323 retry.go:31] will retry after 202.616598ms: waiting for machine to come up
	I0319 20:34:55.612753   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.613313   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.613341   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.613247   60323 retry.go:31] will retry after 338.618778ms: waiting for machine to come up
	I0319 20:34:55.953776   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.954230   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.954259   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.954164   60323 retry.go:31] will retry after 389.19534ms: waiting for machine to come up
	I0319 20:34:56.344417   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.344855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.344886   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.344822   60323 retry.go:31] will retry after 555.697854ms: waiting for machine to come up
	I0319 20:34:56.902547   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.902990   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.903017   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.902955   60323 retry.go:31] will retry after 702.649265ms: waiting for machine to come up
	I0319 20:34:57.606823   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:57.607444   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:57.607484   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:57.607388   60323 retry.go:31] will retry after 814.886313ms: waiting for machine to come up
	I0319 20:34:59.194634   59019 start.go:360] acquireMachinesLock for no-preload-414130: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:34:58.424559   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:58.425066   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:58.425088   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:58.425011   60323 retry.go:31] will retry after 948.372294ms: waiting for machine to come up
	I0319 20:34:59.375490   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:59.375857   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:59.375884   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:59.375809   60323 retry.go:31] will retry after 1.206453994s: waiting for machine to come up
	I0319 20:35:00.584114   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:00.584548   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:00.584572   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:00.584496   60323 retry.go:31] will retry after 1.200177378s: waiting for machine to come up
	I0319 20:35:01.786803   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:01.787139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:01.787167   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:01.787085   60323 retry.go:31] will retry after 1.440671488s: waiting for machine to come up
	I0319 20:35:03.229775   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:03.230179   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:03.230216   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:03.230146   60323 retry.go:31] will retry after 2.073090528s: waiting for machine to come up
	I0319 20:35:05.305427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:05.305904   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:05.305930   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:05.305859   60323 retry.go:31] will retry after 3.463824423s: waiting for machine to come up
	I0319 20:35:08.773517   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:08.773911   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:08.773938   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:08.773873   60323 retry.go:31] will retry after 4.159170265s: waiting for machine to come up
	I0319 20:35:12.937475   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937965   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has current primary IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937979   59415 main.go:141] libmachine: (embed-certs-421660) Found IP for machine: 192.168.50.108
	I0319 20:35:12.937987   59415 main.go:141] libmachine: (embed-certs-421660) Reserving static IP address...
	I0319 20:35:12.938372   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.938400   59415 main.go:141] libmachine: (embed-certs-421660) DBG | skip adding static IP to network mk-embed-certs-421660 - found existing host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"}
	I0319 20:35:12.938412   59415 main.go:141] libmachine: (embed-certs-421660) Reserved static IP address: 192.168.50.108
	I0319 20:35:12.938435   59415 main.go:141] libmachine: (embed-certs-421660) Waiting for SSH to be available...
	I0319 20:35:12.938448   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Getting to WaitForSSH function...
	I0319 20:35:12.940523   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.940897   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.940932   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.941037   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH client type: external
	I0319 20:35:12.941069   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa (-rw-------)
	I0319 20:35:12.941102   59415 main.go:141] libmachine: (embed-certs-421660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:12.941116   59415 main.go:141] libmachine: (embed-certs-421660) DBG | About to run SSH command:
	I0319 20:35:12.941128   59415 main.go:141] libmachine: (embed-certs-421660) DBG | exit 0
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:13.068386   59415 main.go:141] libmachine: (embed-certs-421660) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:13.068756   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetConfigRaw
	I0319 20:35:13.069421   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.071751   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072101   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.072133   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072393   59415 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:35:13.072557   59415 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:13.072574   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:13.072781   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.075005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.075369   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.075678   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075816   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075973   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.076134   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.076364   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.076382   59415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:13.188983   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:13.189017   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189291   59415 buildroot.go:166] provisioning hostname "embed-certs-421660"
	I0319 20:35:13.189319   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189503   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.191881   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192190   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.192210   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192389   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.192550   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192696   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192818   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.192989   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.193145   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.193159   59415 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-421660 && echo "embed-certs-421660" | sudo tee /etc/hostname
	I0319 20:35:13.326497   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-421660
	
	I0319 20:35:13.326524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.329344   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.329765   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.330179   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330372   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330547   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.330753   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.330928   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.330943   59415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-421660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-421660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-421660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:13.454265   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:13.454297   59415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:13.454320   59415 buildroot.go:174] setting up certificates
	I0319 20:35:13.454334   59415 provision.go:84] configureAuth start
	I0319 20:35:13.454348   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.454634   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.457258   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457692   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.457723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457834   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.460123   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460436   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.460463   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460587   59415 provision.go:143] copyHostCerts
	I0319 20:35:13.460643   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:13.460652   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:13.460719   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:13.460815   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:13.460822   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:13.460846   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:13.460917   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:13.460924   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:13.460945   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:13.461004   59415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.embed-certs-421660 san=[127.0.0.1 192.168.50.108 embed-certs-421660 localhost minikube]
	I0319 20:35:13.553348   59415 provision.go:177] copyRemoteCerts
	I0319 20:35:13.553399   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:13.553424   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.555729   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556036   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.556071   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556199   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.556406   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.556579   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.556725   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:13.642780   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:35:13.670965   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:13.698335   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:13.724999   59415 provision.go:87] duration metric: took 270.652965ms to configureAuth
	I0319 20:35:13.725022   59415 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:13.725174   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:13.725235   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.727653   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.727969   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.727988   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.728186   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.728410   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728581   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728783   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.728960   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.729113   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.729130   59415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:14.012527   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:14.012554   59415 machine.go:97] duration metric: took 939.982813ms to provisionDockerMachine
	I0319 20:35:14.012568   59415 start.go:293] postStartSetup for "embed-certs-421660" (driver="kvm2")
	I0319 20:35:14.012582   59415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:14.012616   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.012969   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:14.012996   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.015345   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015706   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.015759   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015864   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.016069   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.016269   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.016409   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.105236   59415 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:14.110334   59415 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:14.110363   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:14.110435   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:14.110534   59415 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:14.110623   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:14.120911   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:14.148171   59415 start.go:296] duration metric: took 135.590484ms for postStartSetup
	I0319 20:35:14.148209   59415 fix.go:56] duration metric: took 19.955089617s for fixHost
	I0319 20:35:14.148234   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.150788   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.151165   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151331   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.151514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151667   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151784   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.151953   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:14.152125   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:14.152138   59415 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:14.265435   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880514.234420354
	
	I0319 20:35:14.265467   59415 fix.go:216] guest clock: 1710880514.234420354
	I0319 20:35:14.265478   59415 fix.go:229] Guest: 2024-03-19 20:35:14.234420354 +0000 UTC Remote: 2024-03-19 20:35:14.148214105 +0000 UTC m=+251.208119911 (delta=86.206249ms)
	I0319 20:35:14.265507   59415 fix.go:200] guest clock delta is within tolerance: 86.206249ms
	I0319 20:35:14.265516   59415 start.go:83] releasing machines lock for "embed-certs-421660", held for 20.072435424s
	I0319 20:35:14.265554   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.265868   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:14.268494   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268846   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.268874   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269589   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269751   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269833   59415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:14.269884   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.269956   59415 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:14.269972   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.272604   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272771   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272978   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273137   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273140   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273160   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273316   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273337   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273473   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273685   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.273738   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.358033   59415 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:14.385511   59415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:14.542052   59415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:14.549672   59415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:14.549747   59415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:14.569110   59415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:14.569137   59415 start.go:494] detecting cgroup driver to use...
	I0319 20:35:14.569193   59415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:14.586644   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:14.601337   59415 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:14.601407   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:14.616158   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:14.631754   59415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:14.746576   59415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:14.902292   59415 docker.go:233] disabling docker service ...
	I0319 20:35:14.902353   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:14.920787   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:14.938865   59415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:15.078791   59415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:15.214640   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:15.242992   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:15.264698   59415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:15.264755   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.276750   59415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:15.276817   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.288643   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.300368   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.318906   59415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:15.338660   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.351908   59415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.372022   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.384124   59415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:15.395206   59415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:15.395268   59415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:15.411193   59415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:15.422031   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:15.572313   59415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:15.730316   59415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:15.730389   59415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:15.738539   59415 start.go:562] Will wait 60s for crictl version
	I0319 20:35:15.738600   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:35:15.743107   59415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:15.788582   59415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:15.788666   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.819444   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.859201   59415 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:15.860492   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:15.863655   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864032   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:15.864063   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864303   59415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:15.870600   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:15.885694   59415 kubeadm.go:877] updating cluster {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:15.885833   59415 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:15.885890   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:15.924661   59415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:15.924736   59415 ssh_runner.go:195] Run: which lz4
	I0319 20:35:15.929595   59415 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:15.934980   59415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:15.935014   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:17.673355   59415 crio.go:462] duration metric: took 1.743798593s to copy over tarball
	I0319 20:35:17.673428   59415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:20.169596   59415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49612841s)
	I0319 20:35:20.169629   59415 crio.go:469] duration metric: took 2.496240167s to extract the tarball
	I0319 20:35:20.169639   59415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:20.208860   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:20.261040   59415 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:35:20.261063   59415 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:35:20.261071   59415 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.29.3 crio true true} ...
	I0319 20:35:20.261162   59415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-421660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:20.261227   59415 ssh_runner.go:195] Run: crio config
	I0319 20:35:20.311322   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:20.311346   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:20.311359   59415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:20.311377   59415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-421660 NodeName:embed-certs-421660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:35:20.311501   59415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-421660"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:20.311560   59415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:35:20.323700   59415 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:20.323776   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:20.334311   59415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0319 20:35:20.352833   59415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:20.372914   59415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0319 20:35:20.391467   59415 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:20.395758   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:20.408698   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:20.532169   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:20.550297   59415 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660 for IP: 192.168.50.108
	I0319 20:35:20.550320   59415 certs.go:194] generating shared ca certs ...
	I0319 20:35:20.550339   59415 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:20.550507   59415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:20.550574   59415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:20.550586   59415 certs.go:256] generating profile certs ...
	I0319 20:35:20.550700   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/client.key
	I0319 20:35:20.550774   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key.e5ca10b2
	I0319 20:35:20.550824   59415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key
	I0319 20:35:20.550954   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:20.550988   59415 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:20.551001   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:20.551037   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:20.551070   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:20.551101   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:20.551155   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:20.552017   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:20.583444   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:20.616935   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:20.673499   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:20.707988   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0319 20:35:20.734672   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:35:20.761302   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:20.792511   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:20.819903   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:20.848361   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:20.878230   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:20.908691   59415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:20.930507   59415 ssh_runner.go:195] Run: openssl version
	I0319 20:35:20.937088   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:20.949229   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954299   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954343   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.960610   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:20.972162   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:20.984137   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989211   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989273   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.995436   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:21.007076   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:21.018552   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024109   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024146   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.030344   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:21.041615   59415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:21.046986   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:21.053533   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:21.060347   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:21.067155   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:21.074006   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:21.080978   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:35:21.087615   59415 kubeadm.go:391] StartCluster: {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:21.087695   59415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:21.087745   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.131217   59415 cri.go:89] found id: ""
	I0319 20:35:21.131294   59415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:21.143460   59415 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:21.143487   59415 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:21.143493   59415 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:21.143545   59415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:21.156145   59415 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:21.157080   59415 kubeconfig.go:125] found "embed-certs-421660" server: "https://192.168.50.108:8443"
	I0319 20:35:21.158865   59415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:21.171515   59415 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.108
	I0319 20:35:21.171551   59415 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:21.171561   59415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:21.171607   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.221962   59415 cri.go:89] found id: ""
	I0319 20:35:21.222028   59415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:21.239149   59415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:21.250159   59415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:21.250185   59415 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:21.250242   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:21.260035   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:21.260107   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:21.270804   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:21.281041   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:21.281106   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:21.291796   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.301883   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:21.301943   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.313038   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:21.323390   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:21.323462   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:21.333893   59415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:21.344645   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:21.491596   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.349871   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.592803   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.670220   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.802978   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:22.803071   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
	I0319 20:35:23.303983   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.803254   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.846475   59415 api_server.go:72] duration metric: took 1.043496842s to wait for apiserver process to appear ...
	I0319 20:35:23.846509   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:35:23.846532   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:23.847060   59415 api_server.go:269] stopped: https://192.168.50.108:8443/healthz: Get "https://192.168.50.108:8443/healthz": dial tcp 192.168.50.108:8443: connect: connection refused
	I0319 20:35:24.347376   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.456794   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.456826   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.456841   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.492793   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.492827   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.847365   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.857297   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:26.857327   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.346936   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.351748   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:27.351775   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.847430   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.852157   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:35:27.868953   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:35:27.869006   59415 api_server.go:131] duration metric: took 4.022477349s to wait for apiserver health ...
	I0319 20:35:27.869019   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:27.869029   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:27.871083   59415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:35:27.872669   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:35:27.886256   59415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:35:27.912891   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:35:27.928055   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:35:27.928088   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:35:27.928095   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:35:27.928102   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:35:27.928108   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:35:27.928113   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:35:27.928118   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:35:27.928126   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:35:27.928130   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:35:27.928136   59415 system_pods.go:74] duration metric: took 15.221738ms to wait for pod list to return data ...
	I0319 20:35:27.928142   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:35:27.931854   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:35:27.931876   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:35:27.931888   59415 node_conditions.go:105] duration metric: took 3.74189ms to run NodePressure ...
	I0319 20:35:27.931903   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:28.209912   59415 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215315   59415 kubeadm.go:733] kubelet initialised
	I0319 20:35:28.215343   59415 kubeadm.go:734] duration metric: took 5.403708ms waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215353   59415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:28.221636   59415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.230837   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230868   59415 pod_ready.go:81] duration metric: took 9.198177ms for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.230878   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230887   59415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.237452   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237472   59415 pod_ready.go:81] duration metric: took 6.569363ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.237479   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237485   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.242902   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242919   59415 pod_ready.go:81] duration metric: took 5.427924ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.242926   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242931   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.316859   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316889   59415 pod_ready.go:81] duration metric: took 73.950437ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.316901   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316908   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.717107   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717133   59415 pod_ready.go:81] duration metric: took 400.215265ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.717143   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717151   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.117365   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117403   59415 pod_ready.go:81] duration metric: took 400.242952ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.117416   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117427   59415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.517914   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517950   59415 pod_ready.go:81] duration metric: took 400.512217ms for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.517962   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517974   59415 pod_ready.go:38] duration metric: took 1.302609845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:29.518009   59415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:35:29.534665   59415 ops.go:34] apiserver oom_adj: -16
	I0319 20:35:29.534686   59415 kubeadm.go:591] duration metric: took 8.39118752s to restartPrimaryControlPlane
	I0319 20:35:29.534697   59415 kubeadm.go:393] duration metric: took 8.447087595s to StartCluster
	I0319 20:35:29.534713   59415 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.534814   59415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:29.536379   59415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.536620   59415 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:35:29.538397   59415 out.go:177] * Verifying Kubernetes components...
	I0319 20:35:29.536707   59415 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:35:29.536837   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:29.539696   59415 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-421660"
	I0319 20:35:29.539709   59415 addons.go:69] Setting metrics-server=true in profile "embed-certs-421660"
	I0319 20:35:29.539739   59415 addons.go:234] Setting addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:29.539747   59415 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-421660"
	W0319 20:35:29.539751   59415 addons.go:243] addon metrics-server should already be in state true
	W0319 20:35:29.539757   59415 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:35:29.539782   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539786   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539700   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:29.539700   59415 addons.go:69] Setting default-storageclass=true in profile "embed-certs-421660"
	I0319 20:35:29.539882   59415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-421660"
	I0319 20:35:29.540079   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540098   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540107   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540120   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540243   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540282   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.554668   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0319 20:35:29.554742   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0319 20:35:29.554815   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0319 20:35:29.555109   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555148   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555220   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555703   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555708   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555722   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555726   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555828   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555847   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.556077   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556206   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556273   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.556627   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556669   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.556753   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556787   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.559109   59415 addons.go:234] Setting addon default-storageclass=true in "embed-certs-421660"
	W0319 20:35:29.559126   59415 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:35:29.559150   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.559390   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.559425   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.570567   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0319 20:35:29.571010   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.571467   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.571492   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.571831   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.572018   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.573621   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.575889   59415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:35:29.574300   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0319 20:35:29.574529   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0319 20:35:29.577448   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:35:29.577473   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:35:29.577496   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.577913   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.577957   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.578350   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578382   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.578751   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.578877   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578901   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.579318   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.579431   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.579495   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.579509   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.580582   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581050   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.581074   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581166   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.581276   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.583314   59415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:29.581522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.584941   59415 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.584951   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:35:29.584963   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.584980   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.585154   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.587700   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588076   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.588104   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588289   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.588463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.588614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.588791   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.594347   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0319 20:35:29.594626   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.595030   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.595062   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.595384   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.595524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.596984   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.597209   59415 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.597224   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:35:29.597238   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.599955   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.600457   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600533   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.600682   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.600829   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.600926   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.719989   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:29.737348   59415 node_ready.go:35] waiting up to 6m0s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:29.839479   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.839994   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:35:29.840016   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:35:29.852112   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.904335   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:35:29.904358   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:35:29.969646   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:29.969675   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:35:30.031528   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:31.120085   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.280572793s)
	I0319 20:35:31.120135   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120148   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120172   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268019206s)
	I0319 20:35:31.120214   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120229   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120430   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120448   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120457   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120544   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120564   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120588   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120606   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120758   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120788   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120827   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120833   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120841   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.127070   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.127085   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.127287   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.127301   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.138956   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.107385118s)
	I0319 20:35:31.139006   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139027   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139257   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139301   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139319   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139330   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139342   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139546   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139550   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139564   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139579   59415 addons.go:470] Verifying addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:31.141587   59415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:31.142927   59415 addons.go:505] duration metric: took 1.606231661s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:35:31.741584   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.101840   60008 start.go:364] duration metric: took 2m35.508355671s to acquireMachinesLock for "default-k8s-diff-port-385240"
	I0319 20:35:36.101908   60008 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:36.101921   60008 fix.go:54] fixHost starting: 
	I0319 20:35:36.102308   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:36.102352   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:36.118910   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0319 20:35:36.119363   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:36.119926   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:35:36.119957   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:36.120271   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:36.120450   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:36.120614   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:35:36.122085   60008 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385240: state=Stopped err=<nil>
	I0319 20:35:36.122112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	W0319 20:35:36.122284   60008 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:36.124242   60008 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385240" ...
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:35:33.742461   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.241937   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.743420   59415 node_ready.go:49] node "embed-certs-421660" has status "Ready":"True"
	I0319 20:35:36.743447   59415 node_ready.go:38] duration metric: took 7.006070851s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:36.743458   59415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:36.749810   59415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:36.125778   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Start
	I0319 20:35:36.125974   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring networks are active...
	I0319 20:35:36.126542   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network default is active
	I0319 20:35:36.126934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network mk-default-k8s-diff-port-385240 is active
	I0319 20:35:36.127367   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Getting domain xml...
	I0319 20:35:36.128009   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Creating domain...
	I0319 20:35:37.396589   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting to get IP...
	I0319 20:35:37.397626   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398211   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398294   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.398203   60655 retry.go:31] will retry after 263.730992ms: waiting for machine to come up
	I0319 20:35:37.663811   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664345   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664379   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.664300   60655 retry.go:31] will retry after 308.270868ms: waiting for machine to come up
	I0319 20:35:37.974625   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975061   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975095   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.975027   60655 retry.go:31] will retry after 376.884777ms: waiting for machine to come up
	I0319 20:35:38.353624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354101   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354129   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.354056   60655 retry.go:31] will retry after 419.389718ms: waiting for machine to come up
	I0319 20:35:38.774777   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775271   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775299   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.775224   60655 retry.go:31] will retry after 757.534448ms: waiting for machine to come up
	I0319 20:35:39.534258   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534739   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534766   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:39.534698   60655 retry.go:31] will retry after 921.578914ms: waiting for machine to come up
	I0319 20:35:40.457637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458154   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:40.458092   60655 retry.go:31] will retry after 1.079774724s: waiting for machine to come up
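The "retry.go:31] will retry after ..." lines above follow a simple poll-with-backoff pattern: ask libvirt for the domain's DHCP lease, and if no address has been assigned yet, sleep for a growing, jittered interval and try again until a deadline expires. The following is a minimal, self-contained Go sketch of that pattern under stated assumptions; the names getIP and waitForIP are illustrative and are not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// getIP stands in for the libvirt DHCP-lease lookup reported by the DBG
// lines above; here it always fails so the retry path is exercised.
func getIP() (string, error) {
	return "", errNoIP
}

// waitForIP polls getIP with a growing, jittered delay until an address
// appears or the deadline passes, mirroring the "will retry after ..." log lines.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		// Jitter the delay so concurrent machines do not poll in lockstep.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		base += 200 * time.Millisecond
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}

The jitter is why the delays printed above (263ms, 308ms, 376ms, ...) grow but are not exact multiples of one another.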
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:38.759504   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.258580   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.539643   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540163   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:41.540113   60655 retry.go:31] will retry after 1.174814283s: waiting for machine to come up
	I0319 20:35:42.716195   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716547   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716576   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:42.716510   60655 retry.go:31] will retry after 1.464439025s: waiting for machine to come up
	I0319 20:35:44.183190   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183673   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183701   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:44.183628   60655 retry.go:31] will retry after 2.304816358s: waiting for machine to come up
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
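The image checks above compare the runtime's stored ID for each pinned image against the expected digest, remove mismatches with crictl, and fall back to the tarballs cached under .minikube/cache/images (the warning shows that fallback failing because the cached coredns file is absent). Below is a rough Go sketch of that check-then-reload flow, shelling out to podman and crictl as the log does; it is an illustration, not minikube's actual cache_images code, and the digest and cache path are copied from the lines above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const (
	image    = "registry.k8s.io/kube-proxy:v1.20.0"
	expected = "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	cached   = "/home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0"
)

func main() {
	// Ask the runtime what it currently has for this tag (empty if absent).
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	actual := strings.TrimSpace(string(out))

	if actual == expected {
		fmt.Println("image already present at the pinned digest")
		return
	}

	// Wrong or missing image: remove it, then load the cached tarball.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", cached).Run(); err != nil {
		fmt.Println("reload failed:", err)
	}
}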
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
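The kubeadm options recorded at kubeadm.go:181 are rendered into the config shown above before being written to /var/tmp/minikube/kubeadm.yaml.new on the node. The following minimal Go sketch shows how such a config could be generated from a few of those parameters with text/template; the template and the kubeadmParams type are illustrative assumptions, not minikube's real template, and the values are taken from this log.

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	NodeName          string
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
	CRISocket         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		NodeName:          "old-k8s-version-159022",
		AdvertiseAddress:  "192.168.61.28",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
		CRISocket:         "/var/run/crio/crio.sock",
	}
	// Render to stdout; a real provisioner would write the result to a file
	// and hand it to "kubeadm init --config".
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}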
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
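The six "openssl x509 -checkend 86400" runs above verify that each control-plane certificate stays valid for at least another 24 hours. An in-process equivalent using crypto/x509 (an illustrative sketch, not the code under test) would be:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path will
	// expire within the given window (the "-checkend" equivalent).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
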
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:43.756917   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:44.893540   59415 pod_ready.go:92] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.893576   59415 pod_ready.go:81] duration metric: took 8.143737931s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.893592   59415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903602   59415 pod_ready.go:92] pod "etcd-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.903640   59415 pod_ready.go:81] duration metric: took 10.03087ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903653   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926651   59415 pod_ready.go:92] pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.926682   59415 pod_ready.go:81] duration metric: took 23.020281ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926696   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935080   59415 pod_ready.go:92] pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.935113   59415 pod_ready.go:81] duration metric: took 8.409239ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935126   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947241   59415 pod_ready.go:92] pod "kube-proxy-qvn26" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.947269   59415 pod_ready.go:81] duration metric: took 12.135421ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947280   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155416   59415 pod_ready.go:92] pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:45.155441   59415 pod_ready.go:81] duration metric: took 208.152938ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155460   59415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:47.165059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
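The pod_ready.go lines interleaved above poll each kube-system pod until its Ready condition turns True, recording the wait as a duration metric. A bare-bones version of that readiness poll with client-go (a hypothetical sketch; the real helper lives in the test harness) could look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18453-10028/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-421660", 6*time.Minute))
	}
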
	I0319 20:35:46.490600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491092   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491121   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:46.491050   60655 retry.go:31] will retry after 2.347371858s: waiting for machine to come up
	I0319 20:35:48.841516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.841995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.842018   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:48.841956   60655 retry.go:31] will retry after 2.70576525s: waiting for machine to come up
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
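Because existing configuration files were found, restartPrimaryControlPlane re-runs the individual kubeadm init phases logged above (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init. The sequence reduces to roughly the following sketch (an assumed simplification, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Phases re-run during a control-plane restart, in the order logged above.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			// Assumes kubeadm is on PATH; minikube actually invokes the binary
			// under /var/lib/minikube/binaries/<version>/ via an env-prefixed shell.
			args := append([]string{"init", "phase"}, strings.Fields(p)...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
				return
			}
		}
		fmt.Println("all phases completed")
	}
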
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.662363   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.662389   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.549431   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549931   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549959   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:51.549900   60655 retry.go:31] will retry after 3.429745322s: waiting for machine to come up
	I0319 20:35:54.983382   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.983875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Found IP for machine: 192.168.39.77
	I0319 20:35:54.983908   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserving static IP address...
	I0319 20:35:54.983923   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has current primary IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.984212   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.984240   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserved static IP address: 192.168.39.77
	I0319 20:35:54.984292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | skip adding static IP to network mk-default-k8s-diff-port-385240 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"}
	I0319 20:35:54.984307   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for SSH to be available...
	I0319 20:35:54.984322   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Getting to WaitForSSH function...
	I0319 20:35:54.986280   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986591   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.986624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986722   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH client type: external
	I0319 20:35:54.986752   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa (-rw-------)
	I0319 20:35:54.986783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:54.986796   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | About to run SSH command:
	I0319 20:35:54.986805   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | exit 0
	I0319 20:35:55.112421   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:55.112825   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetConfigRaw
	I0319 20:35:55.113456   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.115976   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116349   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.116377   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116587   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:35:55.116847   60008 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:55.116874   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:55.117099   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.119475   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.119911   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.119947   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.120112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.120312   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120478   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120629   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.120793   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.120970   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.120982   60008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:55.229055   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:55.229090   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229360   60008 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385240"
	I0319 20:35:55.229390   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229594   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.232039   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232371   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.232391   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232574   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.232746   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232866   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232967   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.233087   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.233251   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.233264   60008 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385240 && echo "default-k8s-diff-port-385240" | sudo tee /etc/hostname
	I0319 20:35:55.355708   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385240
	
	I0319 20:35:55.355732   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.358292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358610   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.358641   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358880   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.359105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359267   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359415   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.359545   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.359701   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.359724   60008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:55.479083   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:55.479109   60008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:55.479126   60008 buildroot.go:174] setting up certificates
	I0319 20:35:55.479134   60008 provision.go:84] configureAuth start
	I0319 20:35:55.479143   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.479433   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.482040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482378   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.482408   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482535   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.484637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485035   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.485062   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485212   60008 provision.go:143] copyHostCerts
	I0319 20:35:55.485272   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:55.485283   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:55.485334   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:55.485425   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:55.485434   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:55.485454   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:55.485560   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:55.485569   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:55.485586   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:55.485642   60008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385240 san=[127.0.0.1 192.168.39.77 default-k8s-diff-port-385240 localhost minikube]
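provision.go above mints a per-machine TLS server certificate whose SANs cover the loopback address, the machine IP, and the hostnames listed in the san=[...] entry. A condensed crypto/x509 illustration of building a certificate with that SAN set (self-signed here for brevity; minikube signs with its CA key instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-385240"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirror the san=[...] list logged above.
			DNSNames:    []string{"default-k8s-diff-port-385240", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.77")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
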
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.449710   59019 start.go:364] duration metric: took 57.255031003s to acquireMachinesLock for "no-preload-414130"
	I0319 20:35:56.449774   59019 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:56.449786   59019 fix.go:54] fixHost starting: 
	I0319 20:35:56.450187   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:56.450225   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:56.469771   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0319 20:35:56.470265   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:56.470764   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:35:56.470799   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:56.471187   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:56.471362   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:35:56.471545   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:35:56.473295   59019 fix.go:112] recreateIfNeeded on no-preload-414130: state=Stopped err=<nil>
	I0319 20:35:56.473323   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	W0319 20:35:56.473480   59019 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:56.475296   59019 out.go:177] * Restarting existing kvm2 VM for "no-preload-414130" ...
	I0319 20:35:56.476767   59019 main.go:141] libmachine: (no-preload-414130) Calling .Start
	I0319 20:35:56.476947   59019 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:35:56.477657   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:35:56.478036   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:35:56.478443   59019 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:35:56.479131   59019 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:35:53.663220   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:56.163557   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:55.738705   60008 provision.go:177] copyRemoteCerts
	I0319 20:35:55.738779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:55.738812   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.741292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741618   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.741644   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741835   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.741997   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.742105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.742260   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:55.828017   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:55.854341   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0319 20:35:55.881167   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:55.906768   60008 provision.go:87] duration metric: took 427.621358ms to configureAuth
	I0319 20:35:55.906795   60008 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:55.907007   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:55.907097   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.909518   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.909834   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.909863   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.910008   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.910193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910492   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.910670   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.910835   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.910849   60008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:56.207010   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:56.207036   60008 machine.go:97] duration metric: took 1.090170805s to provisionDockerMachine
	I0319 20:35:56.207049   60008 start.go:293] postStartSetup for "default-k8s-diff-port-385240" (driver="kvm2")
	I0319 20:35:56.207066   60008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:56.207086   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.207410   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:56.207435   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.210075   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210494   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.210526   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210671   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.210828   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.211016   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.211167   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.295687   60008 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:56.300508   60008 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:56.300531   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:56.300601   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:56.300677   60008 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:56.300779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:56.310829   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:56.337456   60008 start.go:296] duration metric: took 130.396402ms for postStartSetup
	I0319 20:35:56.337492   60008 fix.go:56] duration metric: took 20.235571487s for fixHost
	I0319 20:35:56.337516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.339907   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340361   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.340388   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340552   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.340749   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.340888   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.341040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.341198   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:56.341357   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:56.341367   60008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:56.449557   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880556.425761325
	
	I0319 20:35:56.449580   60008 fix.go:216] guest clock: 1710880556.425761325
	I0319 20:35:56.449587   60008 fix.go:229] Guest: 2024-03-19 20:35:56.425761325 +0000 UTC Remote: 2024-03-19 20:35:56.337496936 +0000 UTC m=+175.893119280 (delta=88.264389ms)
	I0319 20:35:56.449619   60008 fix.go:200] guest clock delta is within tolerance: 88.264389ms
	I0319 20:35:56.449624   60008 start.go:83] releasing machines lock for "default-k8s-diff-port-385240", held for 20.347739998s
	I0319 20:35:56.449647   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.449915   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:56.452764   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453172   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.453204   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453363   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.453973   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454275   60008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:56.454328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.454443   60008 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:56.454466   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.457060   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457284   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457383   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457418   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457536   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457555   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457567   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457831   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457977   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458126   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458139   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.458282   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.537675   60008 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:56.564279   60008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:56.708113   60008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:56.716216   60008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:56.716301   60008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:56.738625   60008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:56.738643   60008 start.go:494] detecting cgroup driver to use...
	I0319 20:35:56.738707   60008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:56.756255   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:56.772725   60008 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:56.772785   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:56.793261   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:56.812368   60008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:56.948137   60008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:57.139143   60008 docker.go:233] disabling docker service ...
	I0319 20:35:57.139212   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:57.156414   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:57.173655   60008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:57.313924   60008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:57.459539   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:57.478913   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:57.506589   60008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:57.506663   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.520813   60008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:57.520871   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.534524   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.547833   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.568493   60008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:57.582367   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.595859   60008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.616441   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
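The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, set conmon's cgroup, and open unprivileged low ports. A hedged sketch of their expected effect on the drop-in file, derived only from those sed expressions rather than from an inspected file:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",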
	I0319 20:35:57.633329   60008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:57.648803   60008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:57.648886   60008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:57.667845   60008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
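The sysctl probe above fails because br_netfilter is not loaded yet, so the runner loads the module and enables IPv4 forwarding. A hedged recap of that preparation (the re-check of the sysctl is an assumption, not shown in the log):

	sudo modprobe br_netfilter                      # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables  # assumed to report 1 once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"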
	I0319 20:35:57.680909   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:57.825114   60008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:57.996033   60008 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:57.996118   60008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:58.001875   60008 start.go:562] Will wait 60s for crictl version
	I0319 20:35:58.001947   60008 ssh_runner.go:195] Run: which crictl
	I0319 20:35:58.006570   60008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:58.060545   60008 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:58.060628   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.104858   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.148992   60008 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:58.150343   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:58.153222   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153634   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:58.153663   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153924   60008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:58.158830   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
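The /etc/hosts update above uses a build-then-copy pattern: the filtered file is assembled as the unprivileged user and only the final cp runs under sudo, since a plain "sudo cmd > /etc/hosts" would perform the redirection without root privileges. The same pattern in generic form, with the values from this log:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.39.1	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts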
	I0319 20:35:58.174622   60008 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:58.174760   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:58.174819   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:58.220802   60008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:58.220879   60008 ssh_runner.go:195] Run: which lz4
	I0319 20:35:58.225914   60008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:58.230673   60008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:58.230702   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:59.959612   60008 crio.go:462] duration metric: took 1.733738299s to copy over tarball
	I0319 20:35:59.959694   60008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.782684   59019 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:35:57.783613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:57.784088   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:57.784180   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:57.784077   60806 retry.go:31] will retry after 304.011729ms: waiting for machine to come up
	I0319 20:35:58.089864   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.090398   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.090431   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.090325   60806 retry.go:31] will retry after 268.702281ms: waiting for machine to come up
	I0319 20:35:58.360743   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.361173   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.361201   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.361116   60806 retry.go:31] will retry after 373.34372ms: waiting for machine to come up
	I0319 20:35:58.735810   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.736490   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.736518   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.736439   60806 retry.go:31] will retry after 588.9164ms: waiting for machine to come up
	I0319 20:35:59.327363   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.327908   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.327938   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.327881   60806 retry.go:31] will retry after 623.38165ms: waiting for machine to come up
	I0319 20:35:59.952641   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.953108   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.953138   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.953090   60806 retry.go:31] will retry after 896.417339ms: waiting for machine to come up
	I0319 20:36:00.851032   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:00.851485   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:00.851514   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:00.851435   60806 retry.go:31] will retry after 869.189134ms: waiting for machine to come up
	I0319 20:35:58.168341   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:00.664629   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:02.594104   60008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634373226s)
	I0319 20:36:02.594140   60008 crio.go:469] duration metric: took 2.634502157s to extract the tarball
	I0319 20:36:02.594149   60008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:36:02.635454   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:02.692442   60008 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:36:02.692468   60008 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:36:02.692477   60008 kubeadm.go:928] updating node { 192.168.39.77 8444 v1.29.3 crio true true} ...
	I0319 20:36:02.692613   60008 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-385240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:02.692697   60008 ssh_runner.go:195] Run: crio config
	I0319 20:36:02.749775   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:02.749798   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:02.749809   60008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:02.749828   60008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385240 NodeName:default-k8s-diff-port-385240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:02.749967   60008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-385240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:02.750034   60008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:36:02.760788   60008 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:02.760843   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:02.770999   60008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0319 20:36:02.789881   60008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:36:02.809005   60008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0319 20:36:02.831122   60008 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:02.835609   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:02.850186   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:02.990032   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:03.013831   60008 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240 for IP: 192.168.39.77
	I0319 20:36:03.013858   60008 certs.go:194] generating shared ca certs ...
	I0319 20:36:03.013879   60008 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:03.014072   60008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:03.014125   60008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:03.014137   60008 certs.go:256] generating profile certs ...
	I0319 20:36:03.014256   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.key
	I0319 20:36:03.014325   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key.5c19d013
	I0319 20:36:03.014389   60008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key
	I0319 20:36:03.014549   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:03.014602   60008 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:03.014626   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:03.014658   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:03.014691   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:03.014728   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:03.014793   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:03.015673   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:03.070837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:03.115103   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:03.150575   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:03.210934   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 20:36:03.254812   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:36:03.286463   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:03.315596   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:03.347348   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:03.375837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:03.407035   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:03.439726   60008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:03.461675   60008 ssh_runner.go:195] Run: openssl version
	I0319 20:36:03.468238   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:03.482384   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487682   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487739   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.494591   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:03.509455   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:03.522545   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527556   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527617   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.533925   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:03.546851   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:03.559553   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564547   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564595   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.570824   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:03.584339   60008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:03.589542   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:03.595870   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:03.602530   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:03.609086   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:03.615621   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:03.622477   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
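The openssl runs above do two things: link each CA into /etc/ssl/certs under its subject-hash name, and verify that none of the cluster certificates expires within the next 24 hours. A hedged sketch of both steps, with paths and the hash value taken from this log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this log
	# -checkend 86400 exits non-zero if the certificate expires within 86400 seconds (24h)
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt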
	I0319 20:36:03.629097   60008 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:03.629186   60008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:03.629234   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.674484   60008 cri.go:89] found id: ""
	I0319 20:36:03.674568   60008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:03.686995   60008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:03.687020   60008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:03.687026   60008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:03.687094   60008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:03.702228   60008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:03.703334   60008 kubeconfig.go:125] found "default-k8s-diff-port-385240" server: "https://192.168.39.77:8444"
	I0319 20:36:03.705508   60008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:03.719948   60008 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.77
	I0319 20:36:03.719985   60008 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:03.719997   60008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:03.720073   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.761557   60008 cri.go:89] found id: ""
	I0319 20:36:03.761619   60008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:03.781849   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:03.793569   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:03.793601   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:03.793652   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:36:03.804555   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:03.804605   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:03.816728   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:36:03.828247   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:03.828318   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:03.840814   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.853100   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:03.853168   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.867348   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:36:03.879879   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:03.879944   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:03.893810   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:03.906056   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:04.038911   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.173514   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134566983s)
	I0319 20:36:05.173547   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.395951   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.480821   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
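Because existing configuration files were found, the control plane is rebuilt phase by phase from the regenerated kubeadm.yaml rather than through a full kubeadm init. The phases run above, consolidated into one hedged sequence:

	K=/var/lib/minikube/binaries/v1.29.3
	sudo env PATH="$K:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml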
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.721671   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:01.722186   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:01.722212   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:01.722142   60806 retry.go:31] will retry after 997.299446ms: waiting for machine to come up
	I0319 20:36:02.720561   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:02.721007   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:02.721037   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:02.720958   60806 retry.go:31] will retry after 1.64420318s: waiting for machine to come up
	I0319 20:36:04.367668   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:04.368140   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:04.368179   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:04.368083   60806 retry.go:31] will retry after 1.972606192s: waiting for machine to come up
	I0319 20:36:06.342643   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:06.343192   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:06.343236   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:06.343136   60806 retry.go:31] will retry after 2.056060208s: waiting for machine to come up
	I0319 20:36:03.164447   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.665089   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.581797   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:05.581879   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.082565   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.582872   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.628756   60008 api_server.go:72] duration metric: took 1.046965637s to wait for apiserver process to appear ...
	I0319 20:36:06.628786   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:06.628808   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:06.629340   60008 api_server.go:269] stopped: https://192.168.39.77:8444/healthz: Get "https://192.168.39.77:8444/healthz": dial tcp 192.168.39.77:8444: connect: connection refused
	I0319 20:36:07.128890   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.231991   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.232024   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.232039   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.280784   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.280820   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.629356   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.660326   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:09.660434   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.128936   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.139305   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:10.139336   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.629187   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.635922   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:36:10.654111   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:36:10.654137   60008 api_server.go:131] duration metric: took 4.025345365s to wait for apiserver health ...
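The healthz polling above moves through the usual restart progression: connection refused while the apiserver comes up, 403 because the unauthenticated probe runs as system:anonymous, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, and finally 200. A hedged manual equivalent of a single probe:

	# -k skips TLS verification; the probe is unauthenticated, mirroring the checks above
	curl -sk https://192.168.39.77:8444/healthz
	# expected once the post-start hooks complete: ok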
	I0319 20:36:10.654146   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:10.654154   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:10.656104   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.401478   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:08.402086   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:08.402111   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:08.402001   60806 retry.go:31] will retry after 2.487532232s: waiting for machine to come up
	I0319 20:36:10.891005   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:10.891550   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:10.891591   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:10.891503   60806 retry.go:31] will retry after 3.741447035s: waiting for machine to come up
	I0319 20:36:08.163468   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.165537   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:12.661667   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.657654   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:10.672795   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
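The 457-byte file copied above is the bridge CNI config for the 10.244.0.0/16 pod CIDR chosen earlier. Its exact contents are not printed in this log, so the following is only an assumed shape based on the standard bridge/host-local/portmap plugins, not the file minikube actually wrote:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF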
	I0319 20:36:10.715527   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:10.728811   60008 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:10.728850   60008 system_pods.go:61] "coredns-76f75df574-hsdk2" [319e5411-97e4-4021-80d0-b39195acb696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:10.728862   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [d10870b0-a0e1-47aa-baf9-07065c1d9142] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:10.728873   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [4925af1b-328f-42ee-b2ef-78b58fcbdd0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:10.728883   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [6dad1c39-3fbc-4364-9ed8-725c0f518191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:10.728889   60008 system_pods.go:61] "kube-proxy-bwj22" [9cc86566-612e-48bc-94c9-a2dad6978c92] Running
	I0319 20:36:10.728896   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [e9c38443-ea8c-4590-94ca-61077f850b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:10.728904   60008 system_pods.go:61] "metrics-server-57f55c9bc5-ddl2q" [ecb174e4-18b0-459e-afb1-137a1f6bdd67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:10.728919   60008 system_pods.go:61] "storage-provisioner" [95fb27b5-769c-4420-8021-3d97942c9f42] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0319 20:36:10.728931   60008 system_pods.go:74] duration metric: took 13.321799ms to wait for pod list to return data ...
	I0319 20:36:10.728944   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:10.743270   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:10.743312   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:10.743326   60008 node_conditions.go:105] duration metric: took 14.37332ms to run NodePressure ...
	I0319 20:36:10.743348   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:11.028786   60008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034096   60008 kubeadm.go:733] kubelet initialised
	I0319 20:36:11.034115   60008 kubeadm.go:734] duration metric: took 5.302543ms waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034122   60008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:11.040118   60008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.046021   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046048   60008 pod_ready.go:81] duration metric: took 5.906752ms for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.046060   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046069   60008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.051677   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051700   60008 pod_ready.go:81] duration metric: took 5.61463ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.051712   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051721   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.057867   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057893   60008 pod_ready.go:81] duration metric: took 6.163114ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.057905   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057912   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:13.065761   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.634526   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:14.635125   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:14.635155   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:14.635074   60806 retry.go:31] will retry after 3.841866145s: waiting for machine to come up
	I0319 20:36:14.662669   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.664913   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:15.565340   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:17.567623   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:19.570775   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.479276   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.479810   59019 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:36:18.479836   59019 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:36:18.479852   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.480232   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.480279   59019 main.go:141] libmachine: (no-preload-414130) DBG | skip adding static IP to network mk-no-preload-414130 - found existing host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"}
	I0319 20:36:18.480297   59019 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:36:18.480319   59019 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:36:18.480336   59019 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:36:18.482725   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483025   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.483052   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483228   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:36:18.483262   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:36:18.483299   59019 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:36:18.483320   59019 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:36:18.483373   59019 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:36:18.612349   59019 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
	I0319 20:36:18.612766   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:36:18.613495   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.616106   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616459   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.616498   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616729   59019 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:36:18.616940   59019 machine.go:94] provisionDockerMachine start ...
	I0319 20:36:18.616957   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:18.617150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.619316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619599   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.619620   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619750   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.619895   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620054   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620166   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.620339   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.620508   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.620521   59019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:36:18.729177   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:36:18.729203   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729483   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:36:18.729511   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729728   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.732330   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732633   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.732664   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.732944   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733087   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733211   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.733347   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.733513   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.733528   59019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:36:18.857142   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:36:18.857178   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.860040   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860434   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.860465   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860682   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.860907   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861102   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861283   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.861462   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.861661   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.861685   59019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:36:18.976726   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:36:18.976755   59019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:36:18.976776   59019 buildroot.go:174] setting up certificates
	I0319 20:36:18.976789   59019 provision.go:84] configureAuth start
	I0319 20:36:18.976803   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.977095   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.980523   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.980948   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.980976   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.981150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.983394   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983720   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.983741   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983887   59019 provision.go:143] copyHostCerts
	I0319 20:36:18.983949   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:36:18.983959   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:36:18.984009   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:36:18.984092   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:36:18.984099   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:36:18.984118   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:36:18.984224   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:36:18.984237   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:36:18.984284   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:36:18.984348   59019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
	I0319 20:36:19.241365   59019 provision.go:177] copyRemoteCerts
	I0319 20:36:19.241422   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:36:19.241445   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.244060   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244362   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.244388   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244593   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.244781   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.244956   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.245125   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.332749   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:36:19.360026   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:36:19.386680   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:36:19.414673   59019 provision.go:87] duration metric: took 437.87318ms to configureAuth
	I0319 20:36:19.414697   59019 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:36:19.414893   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:36:19.414964   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.417627   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.417949   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.417974   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.418139   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.418351   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418513   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418687   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.418854   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.419099   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.419120   59019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:36:19.712503   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:36:19.712538   59019 machine.go:97] duration metric: took 1.095583423s to provisionDockerMachine
	I0319 20:36:19.712554   59019 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:36:19.712573   59019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:36:19.712595   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.712918   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:36:19.712953   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.715455   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715779   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.715813   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715917   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.716098   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.716307   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.716455   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.801402   59019 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:36:19.806156   59019 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:36:19.806181   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:36:19.806253   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:36:19.806330   59019 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:36:19.806451   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:36:19.818601   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:19.845698   59019 start.go:296] duration metric: took 133.131789ms for postStartSetup
	I0319 20:36:19.845728   59019 fix.go:56] duration metric: took 23.395944884s for fixHost
	I0319 20:36:19.845746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.848343   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848727   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.848760   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848909   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.849090   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849256   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849452   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.849667   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.849843   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.849853   59019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:36:19.957555   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880579.901731357
	
	I0319 20:36:19.957574   59019 fix.go:216] guest clock: 1710880579.901731357
	I0319 20:36:19.957581   59019 fix.go:229] Guest: 2024-03-19 20:36:19.901731357 +0000 UTC Remote: 2024-03-19 20:36:19.845732308 +0000 UTC m=+363.236094224 (delta=55.999049ms)
	I0319 20:36:19.957612   59019 fix.go:200] guest clock delta is within tolerance: 55.999049ms
	I0319 20:36:19.957625   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 23.507874645s
	I0319 20:36:19.957656   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.957889   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:19.960613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.960930   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.960957   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.961108   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961627   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961804   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961883   59019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:36:19.961930   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.961996   59019 ssh_runner.go:195] Run: cat /version.json
	I0319 20:36:19.962022   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.964593   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.964790   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965034   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965057   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965250   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965368   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965397   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965416   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965529   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965611   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965677   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965764   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.965788   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965893   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:20.041410   59019 ssh_runner.go:195] Run: systemctl --version
	I0319 20:36:20.067540   59019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:36:20.214890   59019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:36:20.222680   59019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:36:20.222735   59019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:36:20.239981   59019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:36:20.240003   59019 start.go:494] detecting cgroup driver to use...
	I0319 20:36:20.240066   59019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:36:20.260435   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:36:20.277338   59019 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:36:20.277398   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:36:20.294069   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:36:20.309777   59019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:36:20.443260   59019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:36:20.595476   59019 docker.go:233] disabling docker service ...
	I0319 20:36:20.595552   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:36:20.612622   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:36:20.627717   59019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:36:20.790423   59019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:36:20.915434   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:36:20.932043   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:36:20.953955   59019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:36:20.954026   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.966160   59019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:36:20.966230   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.978217   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.990380   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.002669   59019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:36:21.014880   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.026125   59019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.045239   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.056611   59019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:36:21.067763   59019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:36:21.067818   59019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:36:21.084054   59019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:36:21.095014   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:21.237360   59019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:36:21.396979   59019 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:36:21.397047   59019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:36:21.402456   59019 start.go:562] Will wait 60s for crictl version
	I0319 20:36:21.402509   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.406963   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:36:21.446255   59019 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:36:21.446351   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.477273   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.519196   59019 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:36:21.520520   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:21.523401   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.523792   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:21.523822   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.524033   59019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:36:21.528973   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:21.543033   59019 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:36:21.543154   59019 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:36:21.543185   59019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:21.583439   59019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:36:21.583472   59019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:36:21.583515   59019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.583551   59019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.583566   59019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.583610   59019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.583622   59019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.583646   59019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.583731   59019 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:36:21.583766   59019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585216   59019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585225   59019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.585236   59019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.585210   59019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.585247   59019 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:36:21.585253   59019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.585285   59019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.585297   59019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:19.163241   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:21.165282   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:22.071931   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:24.567506   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.567537   60008 pod_ready.go:81] duration metric: took 13.509614974s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.567553   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573414   60008 pod_ready.go:92] pod "kube-proxy-bwj22" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.573444   60008 pod_ready.go:81] duration metric: took 5.881434ms for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573457   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580429   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.580452   60008 pod_ready.go:81] duration metric: took 6.984808ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580463   60008 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.722682   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.727610   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:36:21.738933   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.740326   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.772871   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.801213   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.829968   59019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:36:21.830008   59019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.830053   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.832291   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.945513   59019 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:36:21.945558   59019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.945612   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945618   59019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:36:21.945651   59019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.945663   59019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:36:21.945687   59019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.945695   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945721   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970009   59019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:36:21.970052   59019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.970079   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.970090   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970100   59019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:36:21.970125   59019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.970149   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970177   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:22.062153   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:36:22.062260   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.063754   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.063840   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.091003   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091052   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:22.091104   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091335   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:22.091372   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0319 20:36:22.091382   59019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091405   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:36:22.091423   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0319 20:36:22.091426   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091475   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:22.096817   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0319 20:36:22.155139   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.155289   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.190022   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0319 20:36:22.190072   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.190166   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.507872   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445006   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.353551966s)
	I0319 20:36:26.445031   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0319 20:36:26.445049   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445063   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (4.289744726s)
	I0319 20:36:26.445095   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445099   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445107   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (4.254920134s)
	I0319 20:36:26.445135   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445176   59019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.937263856s)
	I0319 20:36:26.445228   59019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:36:26.445254   59019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445296   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:23.665322   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.167485   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.588550   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:29.088665   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.407117   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (1.96198659s)
	I0319 20:36:28.407156   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:36:28.407176   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407171   59019 ssh_runner.go:235] Completed: which crictl: (1.961850083s)
	I0319 20:36:28.407212   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407244   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:30.495567   59019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088296063s)
	I0319 20:36:30.495590   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.088358118s)
	I0319 20:36:30.495606   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:36:30.495617   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:36:30.495633   59019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495686   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495735   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:28.662588   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.163637   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.589581   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:34.090180   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.473194   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977482574s)
	I0319 20:36:32.473238   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:36:32.473263   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:32.473260   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977498716s)
	I0319 20:36:32.473294   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0319 20:36:32.473311   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:34.927774   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.454440131s)
	I0319 20:36:34.927813   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:36:34.927842   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:34.927888   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:33.664608   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.163358   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.588459   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:38.590173   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.512011   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.584091271s)
	I0319 20:36:37.512048   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0319 20:36:37.512077   59019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:37.512134   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:38.589202   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.077040733s)
	I0319 20:36:38.589231   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0319 20:36:38.589263   59019 cache_images.go:123] Successfully loaded all cached images
	I0319 20:36:38.589278   59019 cache_images.go:92] duration metric: took 17.005785801s to LoadCachedImages
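	The image-cache phase logged above reduces to a per-image check-and-load loop: inspect the runtime for the image, drop any stale tag, then import the cached tarball copied from the host. A minimal sketch of that flow, assuming only the image name, tarball path, and tooling (podman with crictl under crio) shown in the log rather than anything taken from minikube source:
	
	  # Sketch only; names and paths assumed from the log above, not from minikube source.
	  IMG=registry.k8s.io/etcd:3.5.12-0
	  TARBALL=/var/lib/minikube/images/etcd_3.5.12-0
	  # Load the cached tarball only when the runtime does not already hold the image.
	  if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	    sudo crictl rmi "$IMG" 2>/dev/null || true   # remove any stale tag first
	    sudo podman load -i "$TARBALL"               # import the image from the host cache
	  fi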
	I0319 20:36:38.589291   59019 kubeadm.go:928] updating node { 192.168.72.29 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:36:38.589415   59019 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-414130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:38.589495   59019 ssh_runner.go:195] Run: crio config
	I0319 20:36:38.648312   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:38.648334   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:38.648346   59019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:38.648366   59019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-414130 NodeName:no-preload-414130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:38.648494   59019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-414130"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:38.648554   59019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:36:38.665850   59019 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:38.665928   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:38.678211   59019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0319 20:36:38.701657   59019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:36:38.721498   59019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0319 20:36:38.741159   59019 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:38.745617   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:38.759668   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:38.896211   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:38.916698   59019 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130 for IP: 192.168.72.29
	I0319 20:36:38.916720   59019 certs.go:194] generating shared ca certs ...
	I0319 20:36:38.916748   59019 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:38.916888   59019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:38.916930   59019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:38.916943   59019 certs.go:256] generating profile certs ...
	I0319 20:36:38.917055   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.key
	I0319 20:36:38.917134   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key.2d7d554c
	I0319 20:36:38.917185   59019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key
	I0319 20:36:38.917324   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:38.917381   59019 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:38.917396   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:38.917434   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:38.917469   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:38.917501   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:38.917553   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:38.918130   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:38.959630   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:39.007656   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:39.046666   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:39.078901   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:36:39.116600   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:36:39.158517   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:39.188494   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:39.218770   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:39.247341   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:39.275816   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:39.303434   59019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:39.326445   59019 ssh_runner.go:195] Run: openssl version
	I0319 20:36:39.333373   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:39.346280   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352619   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352686   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.359796   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:39.372480   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:39.384231   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389760   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389818   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.396639   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:39.408887   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:39.421847   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427779   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427848   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.434447   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:39.446945   59019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:39.452219   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:39.458729   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:39.465298   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:39.471931   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:39.478810   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:39.485551   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
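	Each of the openssl runs above uses -checkend 86400, which asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status is what would mark that cert for regeneration. A minimal manual equivalent, assuming the apiserver-kubelet-client path from the log:
	
	  # Sketch only; cert path assumed from the log above.
	  CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	  if sudo openssl x509 -noout -in "$CERT" -checkend 86400; then
	    echo "certificate valid for at least another 24h"
	  else
	    echo "certificate expires within 24h"
	  fi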
	I0319 20:36:39.492084   59019 kubeadm.go:391] StartCluster: {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:39.492210   59019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:39.492297   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.535094   59019 cri.go:89] found id: ""
	I0319 20:36:39.535157   59019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:39.549099   59019 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:39.549123   59019 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:39.549129   59019 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:39.549179   59019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:39.560565   59019 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:39.561570   59019 kubeconfig.go:125] found "no-preload-414130" server: "https://192.168.72.29:8443"
	I0319 20:36:39.563750   59019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:39.578708   59019 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.29
	I0319 20:36:39.578746   59019 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:39.578756   59019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:39.578799   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.620091   59019 cri.go:89] found id: ""
	I0319 20:36:39.620152   59019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:39.639542   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:39.652115   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:39.652133   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:39.652190   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:36:39.664047   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:39.664114   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:39.675218   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:36:39.685482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:39.685533   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:39.695803   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.705482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:39.705538   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.715747   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:36:39.725260   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:39.725324   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:39.735246   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:39.745069   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:39.862945   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.548185   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.794369   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.891458   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.992790   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:40.992871   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.493489   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.164706   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:40.662753   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:42.663084   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.087924   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:43.087987   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.993208   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.040237   59019 api_server.go:72] duration metric: took 1.047447953s to wait for apiserver process to appear ...
	I0319 20:36:42.040278   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:42.040323   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:42.040927   59019 api_server.go:269] stopped: https://192.168.72.29:8443/healthz: Get "https://192.168.72.29:8443/healthz": dial tcp 192.168.72.29:8443: connect: connection refused
	I0319 20:36:42.541457   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.853765   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:44.853796   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:44.853834   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.967607   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:44.967648   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.040791   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.049359   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.049400   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.541024   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.545880   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.545907   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.041423   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.046075   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.046101   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.541147   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.546547   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.546587   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:44.664041   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.163545   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.040899   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.046413   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.046453   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:47.541051   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.547309   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.547334   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.040856   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.046293   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:48.046318   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.540858   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.545311   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:36:48.551941   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:36:48.551962   59019 api_server.go:131] duration metric: took 6.511678507s to wait for apiserver health ...
	I0319 20:36:48.551970   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:48.551976   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:48.553824   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:45.588011   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.589644   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:50.088130   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:48.555094   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:48.566904   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:48.592246   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:48.603249   59019 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:48.603277   59019 system_pods.go:61] "coredns-7db6d8ff4d-t42ph" [bc831304-6e17-452d-8059-22bb46bad525] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:48.603284   59019 system_pods.go:61] "etcd-no-preload-414130" [e2ac0f77-fade-4ac6-a472-58df4040a57d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:48.603294   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [1128c23f-0cc6-4cd4-aeed-32f3d4570e2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:48.603300   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [b03747b6-c3ed-44cf-bcc8-dc2cea408100] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:48.603304   59019 system_pods.go:61] "kube-proxy-dttkh" [23ac1cd6-588b-4745-9c0b-740f9f0e684c] Running
	I0319 20:36:48.603313   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [99fde84c-78d6-4c57-8889-c0d9f3b55a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:48.603318   59019 system_pods.go:61] "metrics-server-569cc877fc-jvlnl" [318246fd-b809-40fa-8aff-78eb33ea10fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:48.603322   59019 system_pods.go:61] "storage-provisioner" [80470118-b092-4ba1-b830-d6f13173434d] Running
	I0319 20:36:48.603327   59019 system_pods.go:74] duration metric: took 11.054488ms to wait for pod list to return data ...
	I0319 20:36:48.603336   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:48.606647   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:48.606667   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:48.606678   59019 node_conditions.go:105] duration metric: took 3.33741ms to run NodePressure ...
	I0319 20:36:48.606693   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:48.888146   59019 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898053   59019 kubeadm.go:733] kubelet initialised
	I0319 20:36:48.898073   59019 kubeadm.go:734] duration metric: took 9.903203ms waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898082   59019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:48.911305   59019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:50.918568   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:49.664061   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.162467   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.588174   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:55.088783   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:52.920852   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:54.419386   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.419410   59019 pod_ready.go:81] duration metric: took 5.508083852s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.419420   59019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926059   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.926081   59019 pod_ready.go:81] duration metric: took 506.65554ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926090   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930519   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.930538   59019 pod_ready.go:81] duration metric: took 4.441479ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930546   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.436969   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.436991   59019 pod_ready.go:81] duration metric: took 506.439126ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.437002   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443096   59019 pod_ready.go:92] pod "kube-proxy-dttkh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.443120   59019 pod_ready.go:81] duration metric: took 6.110267ms for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443132   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465091   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:56.465114   59019 pod_ready.go:81] duration metric: took 1.021974956s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465123   59019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.163556   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.663128   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:57.589188   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.093044   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:58.471804   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.478113   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:58.663274   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.663336   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.663834   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.588018   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.087994   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:02.973736   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.471180   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.161734   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.161995   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.091895   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.588324   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:07.971754   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.974080   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.162947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:11.661800   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.088585   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.588430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:12.475641   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.971849   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:13.662232   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:15.663770   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.588577   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.590602   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.972091   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.972471   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.473526   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.161717   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:20.166272   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:22.661804   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.088608   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:23.587236   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:23.975755   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.472094   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:24.663592   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.664475   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:25.589800   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.087868   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.088398   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:28.472705   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.972037   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:29.162647   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.662382   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:32.088569   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.587686   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:32.972676   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:35.471024   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.161665   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.169093   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.587810   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.590891   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:37.472396   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:39.472831   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.662719   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:40.663337   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:41.088396   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.089545   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.972560   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:44.471857   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.162059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.163324   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:47.662016   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.588501   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:48.087736   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:50.091413   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:46.972679   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.473552   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.665655   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.164565   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.587562   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.587929   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:51.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.471542   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.472616   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.663037   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.666932   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.588855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.087276   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:58.971607   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.474225   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.165581   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.169140   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.087715   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.092449   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.972924   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.473336   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.665054   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.666413   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.588698   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.087857   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.088761   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:08.475109   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.973956   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.162101   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.663715   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:12.588671   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.088453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:13.473479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.971583   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:13.162747   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.163005   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.662157   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.587779   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:19.588889   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:18.473455   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.473644   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.162290   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:22.167423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.589226   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.090617   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:22.473728   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.971753   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.663722   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.665702   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.589574   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.088379   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:38:26.972361   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.471757   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.473307   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:28.666420   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.162701   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.089336   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.587759   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.473519   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:35.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.665536   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.161727   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.087816   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.587570   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:38.471727   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.472995   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.162339   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.162741   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:42.661979   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.588023   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.088381   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:45.091312   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:42.474517   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.971377   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.662325   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.663603   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:47.587861   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:50.086555   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.975287   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.475667   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.164849   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:51.661947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.087983   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.588099   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:51.971311   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:53.971907   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:55.972358   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.162991   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.163316   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.588120   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:59.090000   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:57.973293   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.471215   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:58.662516   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.663859   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.588049   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.589283   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.471981   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:04.472495   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.164344   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:05.665651   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:06.086880   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.088337   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:06.971135   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.971957   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.473482   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.162486   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.662096   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:12.662841   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.587271   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:13.087563   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:15.088454   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:13.972462   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.471598   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:14.664028   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.665749   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.587968   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.087138   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.471693   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.473198   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:19.162423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.164242   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:22.087921   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.089243   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:22.971141   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.971943   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:23.663805   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.162590   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.587708   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.588720   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.972042   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.972160   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:30.973402   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.664060   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.161446   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.092087   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.588849   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.473205   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.971076   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.162170   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.164007   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:37.662820   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.588912   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:38.086910   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:40.089385   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:37.971651   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.973467   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.663277   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.162392   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.587250   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.589855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.472493   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.973064   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.162740   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:45.162232   59415 pod_ready.go:81] duration metric: took 4m0.006756965s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:39:45.162255   59415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:39:45.162262   59415 pod_ready.go:38] duration metric: took 4m8.418792568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:39:45.162277   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:39:45.162309   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.162363   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.219659   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:45.219685   59415 cri.go:89] found id: ""
	I0319 20:39:45.219694   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:45.219737   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.225012   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.225072   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.268783   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.268803   59415 cri.go:89] found id: ""
	I0319 20:39:45.268810   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:45.268875   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.273758   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.273813   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:45.316870   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.316893   59415 cri.go:89] found id: ""
	I0319 20:39:45.316901   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:45.316942   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.321910   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:45.321968   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:45.360077   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:45.360098   59415 cri.go:89] found id: ""
	I0319 20:39:45.360105   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:45.360157   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.365517   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:45.365580   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:45.407686   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:45.407704   59415 cri.go:89] found id: ""
	I0319 20:39:45.407711   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:45.407752   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.412894   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:45.412954   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:45.451930   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:45.451953   59415 cri.go:89] found id: ""
	I0319 20:39:45.451964   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:45.452009   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.456634   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:45.456699   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:45.498575   59415 cri.go:89] found id: ""
	I0319 20:39:45.498601   59415 logs.go:276] 0 containers: []
	W0319 20:39:45.498611   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:45.498619   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:45.498678   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:45.548381   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.548400   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.548405   59415 cri.go:89] found id: ""
	I0319 20:39:45.548411   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:45.548469   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.553470   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.558445   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:45.558471   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.603464   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:45.603490   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.650631   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:45.650663   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:45.668744   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:45.668775   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:45.823596   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:45.823625   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.891879   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:45.891911   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.944237   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:45.944284   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:46.005819   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:46.005848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:46.069819   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.069848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.648008   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.648051   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:46.701035   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.701073   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.753159   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:46.753189   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:46.804730   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:46.804767   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:47.087453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.088165   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
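The grep/rm pairs above are minikube's stale kubeconfig check before kubeadm init: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it (here every grep exits with status 2 because the files no longer exist after the reset). A rough Go sketch of the same check, reading the files directly instead of running grep over SSH (illustrative only, not the kubeadm.go code):

package main

import (
	"fmt"
	"os"
	"strings"
)

const controlPlaneEndpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil {
			// File is missing: nothing stale to clean up (the "No such file or
			// directory" case in the log above).
			continue
		}
		if !strings.Contains(string(data), controlPlaneEndpoint) {
			// The config exists but points at a different endpoint, so drop it
			// and let kubeadm write a fresh one.
			fmt.Printf("removing stale %s\n", conf)
			_ = os.Remove(conf)
		}
	}
}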
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:39:47.471171   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.472374   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:49.351186   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.368780   59415 api_server.go:72] duration metric: took 4m19.832131165s to wait for apiserver process to appear ...
	I0319 20:39:49.368806   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:39:49.368844   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:49.368913   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:49.408912   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:49.408937   59415 cri.go:89] found id: ""
	I0319 20:39:49.408947   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:49.409010   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.414194   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:49.414263   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:49.456271   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.456298   59415 cri.go:89] found id: ""
	I0319 20:39:49.456307   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:49.456374   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.461250   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:49.461316   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:49.510029   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:49.510052   59415 cri.go:89] found id: ""
	I0319 20:39:49.510061   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:49.510119   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.515604   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:49.515667   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:49.561004   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:49.561026   59415 cri.go:89] found id: ""
	I0319 20:39:49.561034   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:49.561100   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.566205   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:49.566276   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:49.610666   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:49.610685   59415 cri.go:89] found id: ""
	I0319 20:39:49.610693   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:49.610735   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.615683   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:49.615730   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:49.657632   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:49.657648   59415 cri.go:89] found id: ""
	I0319 20:39:49.657655   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:49.657711   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.662128   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:49.662172   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:49.699037   59415 cri.go:89] found id: ""
	I0319 20:39:49.699060   59415 logs.go:276] 0 containers: []
	W0319 20:39:49.699068   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:49.699074   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:49.699131   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:49.754331   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:49.754353   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:49.754359   59415 cri.go:89] found id: ""
	I0319 20:39:49.754368   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:49.754437   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.759210   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.763797   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:49.763816   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.818285   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:49.818314   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:49.946232   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:49.946266   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.994160   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:49.994186   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:50.042893   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:50.042923   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:50.099333   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:50.099362   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:50.547046   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:50.547082   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:50.593081   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:50.593111   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:50.632611   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:50.632643   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:50.689610   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:50.689641   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:50.707961   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:50.707997   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:50.752684   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:50.752713   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:50.790114   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:50.790139   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:51.089647   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.588183   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:39:51.972170   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:54.471260   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:56.472093   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.338254   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:39:53.343669   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:39:53.344796   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:39:53.344816   59415 api_server.go:131] duration metric: took 3.976004163s to wait for apiserver health ...
	I0319 20:39:53.344824   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:39:53.344854   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:53.344896   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:53.407914   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.407939   59415 cri.go:89] found id: ""
	I0319 20:39:53.407948   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:53.408000   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.414299   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:53.414360   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:53.466923   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:53.466944   59415 cri.go:89] found id: ""
	I0319 20:39:53.466953   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:53.467006   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.472181   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:53.472247   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:53.511808   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:53.511830   59415 cri.go:89] found id: ""
	I0319 20:39:53.511839   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:53.511900   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.517386   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:53.517445   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:53.560360   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:53.560383   59415 cri.go:89] found id: ""
	I0319 20:39:53.560390   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:53.560433   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.565131   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:53.565181   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:53.611243   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:53.611264   59415 cri.go:89] found id: ""
	I0319 20:39:53.611273   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:53.611326   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.616327   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:53.616391   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:53.656775   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:53.656794   59415 cri.go:89] found id: ""
	I0319 20:39:53.656801   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:53.656846   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.661915   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:53.661966   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:53.700363   59415 cri.go:89] found id: ""
	I0319 20:39:53.700389   59415 logs.go:276] 0 containers: []
	W0319 20:39:53.700396   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:53.700401   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:53.700454   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:53.750337   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:53.750357   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:53.750360   59415 cri.go:89] found id: ""
	I0319 20:39:53.750373   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:53.750426   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.755835   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.761078   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:53.761099   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:53.812898   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:53.812928   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:53.934451   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:53.934482   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.989117   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:53.989148   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:54.386028   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:54.386060   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:54.437864   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:54.437893   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:54.456559   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:54.456584   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:54.506564   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:54.506593   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:54.551120   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:54.551151   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:54.595768   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:54.595794   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:54.637715   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:54.637745   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:54.689666   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:54.689706   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:54.731821   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:54.731851   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:57.287839   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:39:57.287866   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.287870   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.287874   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.287878   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.287881   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.287884   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.287890   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.287894   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.287901   59415 system_pods.go:74] duration metric: took 3.943071923s to wait for pod list to return data ...
	I0319 20:39:57.287907   59415 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:39:57.290568   59415 default_sa.go:45] found service account: "default"
	I0319 20:39:57.290587   59415 default_sa.go:55] duration metric: took 2.674741ms for default service account to be created ...
	I0319 20:39:57.290594   59415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:39:57.296691   59415 system_pods.go:86] 8 kube-system pods found
	I0319 20:39:57.296710   59415 system_pods.go:89] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.296718   59415 system_pods.go:89] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.296722   59415 system_pods.go:89] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.296726   59415 system_pods.go:89] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.296730   59415 system_pods.go:89] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.296734   59415 system_pods.go:89] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.296741   59415 system_pods.go:89] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.296747   59415 system_pods.go:89] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.296753   59415 system_pods.go:126] duration metric: took 6.154905ms to wait for k8s-apps to be running ...
	I0319 20:39:57.296762   59415 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:39:57.296803   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:57.313729   59415 system_svc.go:56] duration metric: took 16.960151ms WaitForService to wait for kubelet
	I0319 20:39:57.313753   59415 kubeadm.go:576] duration metric: took 4m27.777105553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:39:57.313777   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:39:57.316765   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:39:57.316789   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:39:57.316803   59415 node_conditions.go:105] duration metric: took 3.021397ms to run NodePressure ...
	I0319 20:39:57.316813   59415 start.go:240] waiting for startup goroutines ...
	I0319 20:39:57.316820   59415 start.go:245] waiting for cluster config update ...
	I0319 20:39:57.316830   59415 start.go:254] writing updated cluster config ...
	I0319 20:39:57.317087   59415 ssh_runner.go:195] Run: rm -f paused
	I0319 20:39:57.365814   59415 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:39:57.368111   59415 out.go:177] * Done! kubectl is now configured to use "embed-certs-421660" cluster and "default" namespace by default
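The closing "minor skew: 0" line compares the local kubectl minor version with the cluster's; minikube only warns when the gap exceeds the supported skew. A tiny, self-contained illustration of that comparison (the version strings come from the line above; the helper itself is an assumption for illustration, not minikube's start.go logic):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. ("1.29.3", "1.29.3") -> 0.
func minorSkew(kubectlVer, clusterVer string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(kubectlVer)
	if err != nil {
		return 0, err
	}
	b, err := minor(clusterVer)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	skew, _ := minorSkew("1.29.3", "1.29.3")
	fmt.Println("minor skew:", skew) // prints 0, matching the log line above
}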
	I0319 20:39:56.088199   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.088480   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.091027   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.971917   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.972329   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:02.589430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.088313   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:03.474330   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.972928   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:07.587315   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:09.588829   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:08.471254   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:10.472963   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.087905   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:14.589786   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.973661   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:15.471559   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.087489   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.087559   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.473159   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.975538   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:21.090446   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:23.588215   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.581466   60008 pod_ready.go:81] duration metric: took 4m0.000988658s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:24.581495   60008 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:40:24.581512   60008 pod_ready.go:38] duration metric: took 4m13.547382951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:24.581535   60008 kubeadm.go:591] duration metric: took 4m20.894503953s to restartPrimaryControlPlane
	W0319 20:40:24.581583   60008 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:24.581611   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:22.472853   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.972183   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:26.973460   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:28.974127   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:31.475479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:33.973020   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:36.471909   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:38.473008   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:40.975638   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:43.473149   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:45.474566   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:47.972615   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:50.472593   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:52.973302   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:55.472067   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:56.465422   59019 pod_ready.go:81] duration metric: took 4m0.000285496s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:56.465453   59019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0319 20:40:56.465495   59019 pod_ready.go:38] duration metric: took 4m7.567400515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:56.465521   59019 kubeadm.go:591] duration metric: took 4m16.916387223s to restartPrimaryControlPlane
	W0319 20:40:56.465574   59019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:56.465604   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:56.963018   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.381377433s)
	I0319 20:40:56.963106   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:40:56.982252   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:40:56.994310   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:40:57.004950   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:40:57.004974   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:40:57.005018   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:40:57.015009   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:40:57.015070   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:40:57.026153   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:40:57.036560   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:40:57.036611   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:40:57.047469   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.060137   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:40:57.060188   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.073305   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:40:57.083299   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:40:57.083372   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
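The grep/rm sequence above is the stale-kubeconfig cleanup minikube runs before re-invoking `kubeadm init`: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) and is otherwise removed so kubeadm can regenerate it. A minimal Go sketch of that check, assuming the paths and endpoint shown in the log (the helper name is hypothetical, not minikube source):

```go
// Illustrative sketch, not minikube's kubeadm.go: keep a kubeconfig only if it
// already points at the expected control-plane endpoint, otherwise delete it
// so the next `kubeadm init` rewrites it. Paths/endpoint are taken from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = os.Remove(f) // ignore "no such file", mirroring `rm -f`
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```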
	I0319 20:40:57.093788   60008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:40:57.352358   60008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:05.910387   60008 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:41:05.910460   60008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:05.910542   60008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:05.910660   60008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:05.910798   60008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:05.910903   60008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:05.912366   60008 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:05.912439   60008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:05.912493   60008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:05.912563   60008 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:05.912614   60008 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:05.912673   60008 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:05.912726   60008 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:05.912809   60008 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:05.912874   60008 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:05.912975   60008 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:05.913082   60008 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:05.913142   60008 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:05.913197   60008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:05.913258   60008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:05.913363   60008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:05.913439   60008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:05.913536   60008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:05.913616   60008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:05.913738   60008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:05.913841   60008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:05.915394   60008 out.go:204]   - Booting up control plane ...
	I0319 20:41:05.915486   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:05.915589   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:05.915682   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:05.915832   60008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:05.915951   60008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:05.916010   60008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:05.916154   60008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:05.916255   60008 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505433 seconds
	I0319 20:41:05.916392   60008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:05.916545   60008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:05.916628   60008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:05.916839   60008 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-385240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:05.916908   60008 kubeadm.go:309] [bootstrap-token] Using token: y9pq78.ls188thm3dr5dool
	I0319 20:41:05.918444   60008 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:05.918567   60008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:05.918654   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:05.918821   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:05.918999   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:05.919147   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:05.919260   60008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:05.919429   60008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:05.919498   60008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:05.919572   60008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:05.919582   60008 kubeadm.go:309] 
	I0319 20:41:05.919665   60008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:05.919678   60008 kubeadm.go:309] 
	I0319 20:41:05.919787   60008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:05.919799   60008 kubeadm.go:309] 
	I0319 20:41:05.919834   60008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:05.919929   60008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:05.920007   60008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:05.920017   60008 kubeadm.go:309] 
	I0319 20:41:05.920102   60008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:05.920112   60008 kubeadm.go:309] 
	I0319 20:41:05.920182   60008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:05.920191   60008 kubeadm.go:309] 
	I0319 20:41:05.920284   60008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:05.920411   60008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:05.920506   60008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:05.920520   60008 kubeadm.go:309] 
	I0319 20:41:05.920648   60008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:05.920762   60008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:05.920771   60008 kubeadm.go:309] 
	I0319 20:41:05.920901   60008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921063   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:05.921099   60008 kubeadm.go:309] 	--control-plane 
	I0319 20:41:05.921105   60008 kubeadm.go:309] 
	I0319 20:41:05.921207   60008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:05.921216   60008 kubeadm.go:309] 
	I0319 20:41:05.921285   60008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921386   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:05.921397   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:41:05.921403   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:05.922921   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:05.924221   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:05.941888   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:06.040294   60008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:06.040378   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.040413   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-385240 minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=default-k8s-diff-port-385240 minikube.k8s.io/primary=true
	I0319 20:41:06.104038   60008 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:06.266168   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.766345   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.266622   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.766418   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.266864   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.766777   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.266420   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.766319   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:10.266990   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:10.766714   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.266839   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.767222   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.266933   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.766390   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.266562   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.766618   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.267159   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.767010   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.266307   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.767002   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.266488   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.766567   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.266789   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.766935   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.266312   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.767202   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.904766   60008 kubeadm.go:1107] duration metric: took 12.864451937s to wait for elevateKubeSystemPrivileges
	W0319 20:41:18.904802   60008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:18.904810   60008 kubeadm.go:393] duration metric: took 5m15.275720912s to StartCluster
	I0319 20:41:18.904826   60008 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.904910   60008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:18.906545   60008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.906817   60008 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:18.908538   60008 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:18.906944   60008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:18.907019   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:41:18.910084   60008 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:18.910100   60008 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910125   60008 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:18.910135   60008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385240"
	W0319 20:41:18.910141   60008 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:18.910255   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910127   60008 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.910313   60008 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:18.910334   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910603   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910635   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910647   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910667   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910692   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910671   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.927094   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0319 20:41:18.927240   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0319 20:41:18.927517   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.927620   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928036   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928059   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928074   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0319 20:41:18.928331   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928360   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928492   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928538   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928737   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928993   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.929009   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.929046   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.929066   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929108   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.929338   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.929862   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929893   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.932815   60008 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.932838   60008 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:18.932865   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.933211   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.933241   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.945888   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0319 20:41:18.946351   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.946842   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.946869   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.947426   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.947600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.947808   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0319 20:41:18.948220   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.948367   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0319 20:41:18.948739   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.948753   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.949222   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.949277   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.951252   60008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:18.949736   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.950173   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.951720   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.952838   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.952813   60008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:18.952917   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:18.952934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.952815   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.953264   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.953460   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.955228   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.957199   60008 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:18.958698   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:18.958715   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:18.958733   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.956502   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.957073   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.958806   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.958845   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.959306   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.959485   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.959783   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.961410   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961775   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.961802   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961893   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.962065   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.962213   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.962369   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.975560   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0319 20:41:18.976026   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.976503   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.976524   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.976893   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.977128   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.978582   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.978862   60008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:18.978881   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:18.978898   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.981356   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981730   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.981762   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.982056   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.982192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.982337   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:19.126985   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:19.188792   60008 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198961   60008 node_ready.go:49] node "default-k8s-diff-port-385240" has status "Ready":"True"
	I0319 20:41:19.198981   60008 node_ready.go:38] duration metric: took 10.160382ms for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198992   60008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:19.209346   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:19.335212   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:19.414291   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:19.506570   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:19.506590   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:19.651892   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:19.651916   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:19.808237   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:19.808282   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:19.924353   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:20.583635   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169310347s)
	I0319 20:41:20.583700   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.583717   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.583981   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.583991   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.584015   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.584027   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.584253   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.584282   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585518   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250274289s)
	I0319 20:41:20.585568   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585584   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.585855   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.585879   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.585888   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585902   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585916   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.586162   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.586168   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.586177   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.609166   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.609183   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.609453   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.609492   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.609502   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.750409   60008 pod_ready.go:92] pod "coredns-76f75df574-4rq6h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:20.750433   60008 pod_ready.go:81] duration metric: took 1.541065393s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.750442   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.869692   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.869719   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.869995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.870000   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870025   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870045   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.870057   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.870336   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870352   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870366   60008 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:20.872093   60008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:41:20.873465   60008 addons.go:505] duration metric: took 1.966520277s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:41:21.260509   60008 pod_ready.go:92] pod "coredns-76f75df574-swxdt" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.260533   60008 pod_ready.go:81] duration metric: took 510.083899ms for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.260543   60008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268298   60008 pod_ready.go:92] pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.268324   60008 pod_ready.go:81] duration metric: took 7.772878ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268335   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274436   60008 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.274461   60008 pod_ready.go:81] duration metric: took 6.117464ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274472   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281324   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.281347   60008 pod_ready.go:81] duration metric: took 6.866088ms for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281367   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.593980   60008 pod_ready.go:92] pod "kube-proxy-j7ghm" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.594001   60008 pod_ready.go:81] duration metric: took 312.62702ms for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.594009   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993321   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.993346   60008 pod_ready.go:81] duration metric: took 399.330556ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993362   60008 pod_ready.go:38] duration metric: took 2.794359581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:21.993375   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:21.993423   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:22.010583   60008 api_server.go:72] duration metric: took 3.10372573s to wait for apiserver process to appear ...
	I0319 20:41:22.010609   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:22.010629   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:41:22.015218   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:41:22.016276   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:41:22.016291   60008 api_server.go:131] duration metric: took 5.6763ms to wait for apiserver health ...
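The healthz probe logged just above (GET https://192.168.39.77:8444/healthz returning 200 with body "ok") is how the harness decides the restarted apiserver is serving before it moves on to checking kube-system pods. A rough, self-contained sketch of the same probe using client-go, assuming a kubeconfig at the default location; this is an illustration, not minikube's api_server.go:

```go
// Illustrative sketch: query the apiserver's /healthz endpoint through an
// authenticated client-go REST client. Kubeconfig location is an assumption.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz returns the literal body "ok" with HTTP 200 when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Printf("healthz: %s\n", body)
}
```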
	I0319 20:41:22.016298   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:22.197418   60008 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:22.197454   60008 system_pods.go:61] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.197460   60008 system_pods.go:61] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.197465   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.197470   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.197476   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.197481   60008 system_pods.go:61] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.197485   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.197494   60008 system_pods.go:61] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.197500   60008 system_pods.go:61] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.197514   60008 system_pods.go:74] duration metric: took 181.210964ms to wait for pod list to return data ...
	I0319 20:41:22.197526   60008 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:22.392702   60008 default_sa.go:45] found service account: "default"
	I0319 20:41:22.392738   60008 default_sa.go:55] duration metric: took 195.195704ms for default service account to be created ...
	I0319 20:41:22.392751   60008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:22.595946   60008 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:22.595975   60008 system_pods.go:89] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.595980   60008 system_pods.go:89] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.595985   60008 system_pods.go:89] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.595991   60008 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.595996   60008 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.596006   60008 system_pods.go:89] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.596010   60008 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.596016   60008 system_pods.go:89] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.596022   60008 system_pods.go:89] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.596034   60008 system_pods.go:126] duration metric: took 203.277741ms to wait for k8s-apps to be running ...
	I0319 20:41:22.596043   60008 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:22.596087   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:22.615372   60008 system_svc.go:56] duration metric: took 19.319488ms WaitForService to wait for kubelet
	I0319 20:41:22.615396   60008 kubeadm.go:576] duration metric: took 3.708546167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:22.615413   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:22.793277   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:22.793303   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:22.793313   60008 node_conditions.go:105] duration metric: took 177.89499ms to run NodePressure ...
	I0319 20:41:22.793325   60008 start.go:240] waiting for startup goroutines ...
	I0319 20:41:22.793331   60008 start.go:245] waiting for cluster config update ...
	I0319 20:41:22.793342   60008 start.go:254] writing updated cluster config ...
	I0319 20:41:22.793598   60008 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:22.845339   60008 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:41:22.847429   60008 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-385240" cluster and "default" namespace by default
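Most of this excerpt is minikube's pod-readiness polling: it repeatedly reads a pod's Ready condition and logs `has status "Ready":"False"` until the condition flips to True or the per-pod timeout (4m0s for the metrics-server pods above) expires with `context deadline exceeded`. A minimal client-go sketch of that loop, assuming a default kubeconfig and reusing the metrics-server pod name from the log; this is an illustration, not minikube's pod_ready.go:

```go
// Illustrative sketch of the readiness wait loop seen in the log: poll a pod
// in kube-system until its Ready condition is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-ddl2q", 4*time.Minute); err != nil {
		fmt.Println("wait failed:", err) // e.g. context deadline exceeded, as in the log
	}
}
```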
	I0319 20:41:29.064044   59019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.598411816s)
	I0319 20:41:29.064115   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:29.082924   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:41:29.095050   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:29.106905   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:29.106918   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:29.106962   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:29.118153   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:29.118209   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:29.128632   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:29.140341   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:29.140401   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:29.151723   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.162305   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:29.162365   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.173654   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:29.185155   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:29.185211   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:29.196015   59019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:29.260934   59019 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0319 20:41:29.261054   59019 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:29.412424   59019 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:29.412592   59019 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:29.412759   59019 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:29.636019   59019 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:29.638046   59019 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:29.638158   59019 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:29.638216   59019 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:29.638279   59019 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:29.638331   59019 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:29.645456   59019 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:29.645553   59019 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:29.645610   59019 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:29.645663   59019 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:29.645725   59019 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:29.645788   59019 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:29.645822   59019 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:29.645869   59019 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:29.895850   59019 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:30.248635   59019 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:30.380474   59019 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:30.457908   59019 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:30.585194   59019 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:30.585852   59019 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:30.588394   59019 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:30.590147   59019 out.go:204]   - Booting up control plane ...
	I0319 20:41:30.590241   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:30.590353   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:30.590606   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:30.611645   59019 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:30.614010   59019 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:30.614266   59019 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:30.757838   59019 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 20:41:30.757973   59019 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0319 20:41:31.758717   59019 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001332477s
	I0319 20:41:31.758819   59019 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 20:41:37.261282   59019 kubeadm.go:309] [api-check] The API server is healthy after 5.50238s
	I0319 20:41:37.275017   59019 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:37.299605   59019 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:37.335190   59019 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:37.335449   59019 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-414130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:37.350882   59019 kubeadm.go:309] [bootstrap-token] Using token: 0euy3c.pb7fih13u47u7k5a
	I0319 20:41:37.352692   59019 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:37.352796   59019 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:37.357551   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:37.365951   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:37.369544   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:37.376066   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:37.379284   59019 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:37.669667   59019 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:38.120423   59019 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:38.668937   59019 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:38.670130   59019 kubeadm.go:309] 
	I0319 20:41:38.670236   59019 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:38.670251   59019 kubeadm.go:309] 
	I0319 20:41:38.670339   59019 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:38.670348   59019 kubeadm.go:309] 
	I0319 20:41:38.670369   59019 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:38.670451   59019 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:38.670520   59019 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:38.670530   59019 kubeadm.go:309] 
	I0319 20:41:38.670641   59019 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:38.670653   59019 kubeadm.go:309] 
	I0319 20:41:38.670720   59019 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:38.670731   59019 kubeadm.go:309] 
	I0319 20:41:38.670802   59019 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:38.670916   59019 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:38.671036   59019 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:38.671053   59019 kubeadm.go:309] 
	I0319 20:41:38.671185   59019 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:38.671332   59019 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:38.671351   59019 kubeadm.go:309] 
	I0319 20:41:38.671438   59019 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671588   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:38.671609   59019 kubeadm.go:309] 	--control-plane 
	I0319 20:41:38.671613   59019 kubeadm.go:309] 
	I0319 20:41:38.671684   59019 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:38.671693   59019 kubeadm.go:309] 
	I0319 20:41:38.671758   59019 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671877   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:38.672172   59019 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
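The join commands printed above embed a bootstrap token (0euy3c.pb7fih13u47u7k5a), which kubeadm expires after 24h by default. If a node needs to join later, an equivalent command can be regenerated on the control plane; a minimal sketch, assuming kubeadm is on the control-plane host's PATH:

    $ sudo kubeadm token create --print-join-command
    # emits a fresh "kubeadm join <endpoint> --token <token> --discovery-token-ca-cert-hash sha256:<hash>" line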
	I0319 20:41:38.672197   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:41:38.672212   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:38.674158   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:38.675618   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:38.690458   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
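The 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Purely as an illustrative sketch of what a bridge CNI configuration of that kind typically looks like (field values here are assumptions, not read from this run):

    $ sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF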
	I0319 20:41:38.712520   59019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:38.712597   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:38.712616   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-414130 minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=no-preload-414130 minikube.k8s.io/primary=true
	I0319 20:41:38.902263   59019 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:38.902364   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.403054   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.903127   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.402786   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.903358   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.403414   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.902829   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.402506   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.903338   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.402784   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.902477   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.403152   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.903190   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.402544   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.903397   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:46.402785   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:46.902656   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.402845   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.903436   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.402511   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.903073   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.402559   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.902914   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.402708   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.903441   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.403416   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.585670   59019 kubeadm.go:1107] duration metric: took 12.873132825s to wait for elevateKubeSystemPrivileges
	W0319 20:41:51.585714   59019 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:51.585724   59019 kubeadm.go:393] duration metric: took 5m12.093644869s to StartCluster
	I0319 20:41:51.585744   59019 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.585835   59019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:51.588306   59019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.588634   59019 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:51.590331   59019 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:51.588755   59019 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:51.588891   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:41:51.590430   59019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-414130"
	I0319 20:41:51.591988   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:51.592020   59019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-414130"
	W0319 20:41:51.592038   59019 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:51.592069   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.590437   59019 addons.go:69] Setting default-storageclass=true in profile "no-preload-414130"
	I0319 20:41:51.590441   59019 addons.go:69] Setting metrics-server=true in profile "no-preload-414130"
	I0319 20:41:51.592098   59019 addons.go:234] Setting addon metrics-server=true in "no-preload-414130"
	W0319 20:41:51.592114   59019 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:51.592129   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.592164   59019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-414130"
	I0319 20:41:51.592450   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592479   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592505   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592532   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.608909   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0319 20:41:51.609383   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.609942   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.609962   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.610565   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.610774   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.612725   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0319 20:41:51.612794   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0319 20:41:51.613141   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.613637   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.613660   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.614121   59019 addons.go:234] Setting addon default-storageclass=true in "no-preload-414130"
	W0319 20:41:51.614139   59019 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:51.614167   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.614214   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.614482   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614512   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614774   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614810   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614876   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.615336   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.615369   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.615703   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.616237   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.616281   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.630175   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0319 20:41:51.630802   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.631279   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.631296   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.631645   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.632322   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.632356   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.634429   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0319 20:41:51.634865   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.635311   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.635324   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.635922   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.636075   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.637997   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.640025   59019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:51.641428   59019 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:51.641445   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:51.641462   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.644316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644838   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.644853   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644875   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0319 20:41:51.645162   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.645300   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.645365   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.645499   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.645613   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.645964   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.645976   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.646447   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.646663   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.648174   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.649872   59019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:51.651152   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:51.651177   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:51.651197   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.654111   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654523   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.654545   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654792   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.654987   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.655156   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.655281   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.656648   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0319 20:41:51.656960   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.657457   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.657471   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.657751   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.657948   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.659265   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.659503   59019 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:51.659517   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:51.659533   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.662039   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662427   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.662447   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662583   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.662757   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.662879   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.662991   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.845584   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:51.876597   59019 node_ready.go:35] waiting up to 6m0s for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886290   59019 node_ready.go:49] node "no-preload-414130" has status "Ready":"True"
	I0319 20:41:51.886308   59019 node_ready.go:38] duration metric: took 9.684309ms for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886315   59019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:51.893456   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:51.976850   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:52.031123   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:52.031144   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:52.133184   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:52.195945   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:52.195968   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:52.270721   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.270745   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:52.407604   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.578113   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578140   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578511   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578524   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.578532   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.578557   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578566   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578809   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578828   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.610849   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.610873   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.611246   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.611251   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.611269   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.342742   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.209525982s)
	I0319 20:41:53.342797   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.342808   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343131   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343159   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343163   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.343174   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.343194   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343486   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343503   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343525   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.450430   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.450458   59019 pod_ready.go:81] duration metric: took 1.556981953s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.450478   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459425   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.459454   59019 pod_ready.go:81] duration metric: took 8.967211ms for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459467   59019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495144   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.495164   59019 pod_ready.go:81] duration metric: took 35.690498ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495173   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520382   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.520412   59019 pod_ready.go:81] duration metric: took 25.23062ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520426   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530859   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.530889   59019 pod_ready.go:81] duration metric: took 10.451233ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530903   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.545946   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.13830463s)
	I0319 20:41:53.545994   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546009   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546304   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546323   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546333   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546350   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546678   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546695   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546706   59019 addons.go:470] Verifying addon metrics-server=true in "no-preload-414130"
	I0319 20:41:53.546764   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.548523   59019 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0319 20:41:53.549990   59019 addons.go:505] duration metric: took 1.961237309s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
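With the metrics-server addon applied above, its registration can be checked once the Deployment's pod leaves Pending; a hedged example, assuming the standard metrics-server APIService name:

    $ kubectl --context no-preload-414130 -n kube-system get deploy metrics-server
    $ kubectl --context no-preload-414130 get apiservice v1beta1.metrics.k8s.io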
	I0319 20:41:53.881082   59019 pod_ready.go:92] pod "kube-proxy-m7m4h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.881107   59019 pod_ready.go:81] duration metric: took 350.197776ms for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.881116   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283891   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:54.283924   59019 pod_ready.go:81] duration metric: took 402.800741ms for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283936   59019 pod_ready.go:38] duration metric: took 2.397611991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:54.283953   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:54.284016   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:54.304606   59019 api_server.go:72] duration metric: took 2.715931012s to wait for apiserver process to appear ...
	I0319 20:41:54.304629   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:54.304651   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:41:54.309292   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:41:54.310195   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:41:54.310215   59019 api_server.go:131] duration metric: took 5.579162ms to wait for apiserver health ...
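The healthz probe above is an HTTPS GET against the apiserver; it can be reproduced by hand against the same endpoint. A rough equivalent, assuming anonymous auth is enabled (the kubeadm default) and skipping certificate verification with -k:

    $ curl -k https://192.168.72.29:8443/healthz
    ok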
	I0319 20:41:54.310225   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:54.488441   59019 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:54.488475   59019 system_pods.go:61] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.488482   59019 system_pods.go:61] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.488487   59019 system_pods.go:61] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.488491   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.488496   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.488500   59019 system_pods.go:61] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.488505   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.488514   59019 system_pods.go:61] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.488520   59019 system_pods.go:61] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.488530   59019 system_pods.go:74] duration metric: took 178.298577ms to wait for pod list to return data ...
	I0319 20:41:54.488543   59019 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:54.679537   59019 default_sa.go:45] found service account: "default"
	I0319 20:41:54.679560   59019 default_sa.go:55] duration metric: took 191.010696ms for default service account to be created ...
	I0319 20:41:54.679569   59019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:54.884163   59019 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:54.884197   59019 system_pods.go:89] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.884205   59019 system_pods.go:89] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.884211   59019 system_pods.go:89] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.884217   59019 system_pods.go:89] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.884223   59019 system_pods.go:89] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.884230   59019 system_pods.go:89] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.884236   59019 system_pods.go:89] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.884246   59019 system_pods.go:89] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.884268   59019 system_pods.go:89] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.884281   59019 system_pods.go:126] duration metric: took 204.70598ms to wait for k8s-apps to be running ...
	I0319 20:41:54.884294   59019 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:54.884348   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:54.901838   59019 system_svc.go:56] duration metric: took 17.536645ms WaitForService to wait for kubelet
	I0319 20:41:54.901869   59019 kubeadm.go:576] duration metric: took 3.313198534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:54.901887   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:55.080463   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:55.080485   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:55.080495   59019 node_conditions.go:105] duration metric: took 178.603035ms to run NodePressure ...
	I0319 20:41:55.080507   59019 start.go:240] waiting for startup goroutines ...
	I0319 20:41:55.080513   59019 start.go:245] waiting for cluster config update ...
	I0319 20:41:55.080523   59019 start.go:254] writing updated cluster config ...
	I0319 20:41:55.080753   59019 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:55.130477   59019 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0319 20:41:55.133906   59019 out.go:177] * Done! kubectl is now configured to use "no-preload-414130" cluster and "default" namespace by default
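	(Editor's note: at this point the no-preload-414130 profile has started successfully. A quick manual check, assuming the usual minikube behaviour of naming the kubectl context after the profile, would be:

		kubectl config current-context                          # expected: no-preload-414130
		kubectl --context no-preload-414130 get pods -n kube-system
	)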
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
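	(Editor's note: the kubeadm advice above applies directly to this failure: the kubelet on the node never became healthy, so no control-plane containers were created. A minimal diagnostic pass, run on the node itself, for example via `minikube ssh` against the failing profile, uses exactly the commands kubeadm suggests; the `tail` is only to keep the journal output short:

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	)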
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 
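	(Editor's note: following the suggestion above, a retry of the failing start with an explicit kubelet cgroup driver would look like the sketch below. `<profile>` is a placeholder for the profile used in this run, and the flag only helps if a cgroup-driver mismatch between the kubelet and CRI-O is actually the cause of the kubelet not starting:

		minikube start -p <profile> --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	)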
	
	
	==> CRI-O <==
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.245012843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25dab0d6-a481-4162-83ad-55e9f97f7570 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.247046140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e2a9eeb-3029-4e54-bc27-f2ceaab8eb27 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.247445466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881457247423402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e2a9eeb-3029-4e54-bc27-f2ceaab8eb27 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.248412879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bf0d8c3-590b-4249-a463-7304d79be175 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.248488442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bf0d8c3-590b-4249-a463-7304d79be175 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.248683299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bf0d8c3-590b-4249-a463-7304d79be175 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.291384182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d7b9f35-cb76-4115-8a7a-08e009a2ad0b name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.291485083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d7b9f35-cb76-4115-8a7a-08e009a2ad0b name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.292139264Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f9b6a1e9-2fc9-4120-8f90-16e5b5d51370 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.292662461Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:41ef922212b2e52d2709973e6b1055b10ae431394d42b829920f1df094e1fbaf,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-27n2b,Uid:2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880913707774360,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-27n2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:53.394470079Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880913648529855,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-19T20:41:53.341670897Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jtdrs,Uid:1199d0b5-8f7b-47ca-bdd4-af092b6150ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880912055364510,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:51.425512951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jm8cl,Uid:8c50b962-ed13-4511-
8bef-2a2657f26276,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880911785468159,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c50b962-ed13-4511-8bef-2a2657f26276,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:51.462930527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&PodSandboxMetadata{Name:kube-proxy-m7m4h,Uid:06239fd6-3053-4a7b-9a73-62886b59fa6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880911439418728,Labels:map[string]string{controller-revision-hash: 795c8646d4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:51.110744775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-414130,Uid:367fb688ce35df12d609fff66da3fca7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880891821891625,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 367fb688ce35df12d609fff66da3fca7,kubernetes.io/config.seen: 2024-03-19T20:41:31.364789828Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&PodSandboxMeta
data{Name:kube-apiserver-no-preload-414130,Uid:1b63ff8109d6dceda30dac6065446c32,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880891814600164,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.29:8443,kubernetes.io/config.hash: 1b63ff8109d6dceda30dac6065446c32,kubernetes.io/config.seen: 2024-03-19T20:41:31.364788831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-414130,Uid:5770f2895db5bf0f7197939d73721b15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880891811920605,Labels:map[string]string{component: etcd,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.29:2379,kubernetes.io/config.hash: 5770f2895db5bf0f7197939d73721b15,kubernetes.io/config.seen: 2024-03-19T20:41:31.364787394Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-414130,Uid:60115988efd12a25be1d9eccda362138,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880891790731265,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,tier: control-plane,},Annotations:map[string]string{
kubernetes.io/config.hash: 60115988efd12a25be1d9eccda362138,kubernetes.io/config.seen: 2024-03-19T20:41:31.364782944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f9b6a1e9-2fc9-4120-8f90-16e5b5d51370 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.294277091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf068943-d17b-4418-8e3c-890255aa4e23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.294342665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf068943-d17b-4418-8e3c-890255aa4e23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.294619469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf068943-d17b-4418-8e3c-890255aa4e23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.296195151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1bce1d1-4a84-4487-85d8-9a174d5219dc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.296592894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881457296574914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1bce1d1-4a84-4487-85d8-9a174d5219dc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.297913587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb411b79-9397-4b49-a919-34a46b88e24e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.298038897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb411b79-9397-4b49-a919-34a46b88e24e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.298408943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb411b79-9397-4b49-a919-34a46b88e24e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.344685807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2270621b-2e20-42ab-806b-4fd21d7068d8 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.344791855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2270621b-2e20-42ab-806b-4fd21d7068d8 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.345635259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07089ce4-4c31-4145-8618-f8af78d78128 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.346084902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881457346050790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07089ce4-4c31-4145-8618-f8af78d78128 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.346694190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f55cf4e3-5245-4387-84b8-b14ba155238c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.346749363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f55cf4e3-5245-4387-84b8-b14ba155238c name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:50:57 no-preload-414130 crio[710]: time="2024-03-19 20:50:57.346916625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f55cf4e3-5245-4387-84b8-b14ba155238c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ddf2de435243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5474f6961a001       storage-provisioner
	282175d575751       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f41e04b92463c       coredns-7db6d8ff4d-jtdrs
	f55feb03feb19       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9e896e8005373       coredns-7db6d8ff4d-jm8cl
	d1b92d6f7f1ca       3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8   9 minutes ago       Running             kube-proxy                0                   5757d6c7b01c6       kube-proxy-m7m4h
	693eb66b9864a       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   9 minutes ago       Running             kube-apiserver            2                   e516c7e2a536e       kube-apiserver-no-preload-414130
	f1b8ef909c169       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   faa7e83f965c8       etcd-no-preload-414130
	0ea3891af1838       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   9 minutes ago       Running             kube-controller-manager   2                   cfa6b9652d347       kube-controller-manager-no-preload-414130
	00bbe82194b56       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   9 minutes ago       Running             kube-scheduler            2                   fa7c3e5894b44       kube-scheduler-no-preload-414130
	
	
	==> coredns [282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-414130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-414130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=no-preload-414130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:41:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-414130
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:50:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:47:04 +0000   Tue, 19 Mar 2024 20:41:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:47:04 +0000   Tue, 19 Mar 2024 20:41:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:47:04 +0000   Tue, 19 Mar 2024 20:41:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:47:04 +0000   Tue, 19 Mar 2024 20:41:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.29
	  Hostname:    no-preload-414130
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b75323fa4d64092b46f8ef8b0374374
	  System UUID:                2b75323f-a4d6-4092-b46f-8ef8b0374374
	  Boot ID:                    fda99eb1-b91c-4a0c-8d33-8aab37267322
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-beta.0
	  Kube-Proxy Version:         v1.30.0-beta.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jm8cl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-jtdrs                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-no-preload-414130                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-414130             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-414130    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-m7m4h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-414130             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-27n2b              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node no-preload-414130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node no-preload-414130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node no-preload-414130 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node no-preload-414130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node no-preload-414130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node no-preload-414130 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node no-preload-414130 event: Registered Node no-preload-414130 in Controller
	
	
	==> dmesg <==
	[  +0.042034] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.888028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.566903] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.754827] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.504022] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.064437] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065025] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.194322] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.153356] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.316979] systemd-fstab-generator[694]: Ignoring "noauto" option for root device
	[ +17.644577] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.072749] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.818056] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +5.588931] kauditd_printk_skb: 94 callbacks suppressed
	[  +7.363506] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.225918] kauditd_printk_skb: 20 callbacks suppressed
	[Mar19 20:41] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.885807] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +7.053887] systemd-fstab-generator[4148]: Ignoring "noauto" option for root device
	[  +0.088351] kauditd_printk_skb: 55 callbacks suppressed
	[ +13.789090] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.082567] systemd-fstab-generator[4376]: Ignoring "noauto" option for root device
	[Mar19 20:42] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52] <==
	{"level":"info","ts":"2024-03-19T20:41:32.567072Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-19T20:41:32.574188Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e012057890ddf1de","initial-advertise-peer-urls":["https://192.168.72.29:2380"],"listen-peer-urls":["https://192.168.72.29:2380"],"advertise-client-urls":["https://192.168.72.29:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.29:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-19T20:41:32.574252Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-19T20:41:32.567477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de switched to configuration voters=(16145973629461328350)"}
	{"level":"info","ts":"2024-03-19T20:41:32.574486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f8680c7cbbe1f1ff","local-member-id":"e012057890ddf1de","added-peer-id":"e012057890ddf1de","added-peer-peer-urls":["https://192.168.72.29:2380"]}
	{"level":"info","ts":"2024-03-19T20:41:32.567534Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.29:2380"}
	{"level":"info","ts":"2024-03-19T20:41:32.575627Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.29:2380"}
	{"level":"info","ts":"2024-03-19T20:41:33.424863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:33.425099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:33.425268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de received MsgPreVoteResp from e012057890ddf1de at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:33.425739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de became candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.425811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de received MsgVoteResp from e012057890ddf1de at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.425857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de became leader at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.425897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e012057890ddf1de elected leader e012057890ddf1de at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.427696Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.428861Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e012057890ddf1de","local-member-attributes":"{Name:no-preload-414130 ClientURLs:[https://192.168.72.29:2379]}","request-path":"/0/members/e012057890ddf1de/attributes","cluster-id":"f8680c7cbbe1f1ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:41:33.429147Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:41:33.429616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f8680c7cbbe1f1ff","local-member-id":"e012057890ddf1de","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.429712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.429759Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.429804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:41:33.432689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.29:2379"}
	{"level":"info","ts":"2024-03-19T20:41:33.435682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-19T20:41:33.445474Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:41:33.445532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:50:57 up 14 min,  0 users,  load average: 0.21, 0.11, 0.09
	Linux no-preload-414130 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c] <==
	I0319 20:44:54.228265       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:46:35.070279       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:35.070389       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0319 20:46:36.070510       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:36.070554       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:46:36.070566       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:46:36.070673       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:46:36.070724       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:46:36.071889       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:47:36.071206       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:47:36.071320       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:47:36.071384       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:47:36.072338       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:47:36.072485       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:47:36.072519       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:49:36.071908       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:49:36.072313       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:49:36.072350       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:49:36.073036       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:49:36.073137       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:49:36.074333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f] <==
	I0319 20:45:20.988933       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:45:50.543223       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:45:50.998627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:46:20.549174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:46:21.009565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:46:50.555463       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:46:51.018341       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:47:20.561212       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:47:21.028525       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:47:39.072170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="328.958µs"
	E0319 20:47:50.567540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:47:51.041352       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:47:53.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="70.12µs"
	E0319 20:48:20.572263       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:48:21.052283       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:48:50.578910       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:48:51.060122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:49:20.584594       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:49:21.069617       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:49:50.590504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:49:51.078615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:50:20.597437       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:50:21.088504       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:50:50.604811       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:50:51.101288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233] <==
	I0319 20:41:52.144116       1 server_linux.go:69] "Using iptables proxy"
	I0319 20:41:52.213413       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.29"]
	I0319 20:41:52.342237       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0319 20:41:52.342309       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:41:52.342326       1 server_linux.go:165] "Using iptables Proxier"
	I0319 20:41:52.349912       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:41:52.350228       1 server.go:872] "Version info" version="v1.30.0-beta.0"
	I0319 20:41:52.350270       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:41:52.356060       1 config.go:192] "Starting service config controller"
	I0319 20:41:52.357244       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0319 20:41:52.357356       1 config.go:101] "Starting endpoint slice config controller"
	I0319 20:41:52.357388       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0319 20:41:52.357648       1 config.go:319] "Starting node config controller"
	I0319 20:41:52.357682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0319 20:41:52.458181       1 shared_informer.go:320] Caches are synced for node config
	I0319 20:41:52.464152       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0319 20:41:52.464193       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0] <==
	W0319 20:41:35.093646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:35.093673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:41:35.093717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:41:35.093680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:35.093665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 20:41:35.093738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 20:41:35.974662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:41:35.974761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:41:36.003584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:36.003659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:36.028813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:41:36.028891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:41:36.083265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 20:41:36.083341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 20:41:36.128330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:41:36.128613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:41:36.236226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:36.236300       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:36.241701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 20:41:36.241759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:36.468347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:36.468405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:36.492173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:41:36.492267       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 20:41:39.586399       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:50:28 no-preload-414130 kubelet[4155]: E0319 20:50:28.042864    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:28 no-preload-414130 kubelet[4155]: E0319 20:50:28.042880    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:28 no-preload-414130 kubelet[4155]: E0319 20:50:28.043569    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:28 no-preload-414130 kubelet[4155]: E0319 20:50:28.043629    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:28 no-preload-414130 kubelet[4155]: E0319 20:50:28.043637    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:28 no-preload-414130 kubelet[4155]: E0319 20:50:28.045754    4155 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-27n2b" podUID="2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2"
	Mar 19 20:50:38 no-preload-414130 kubelet[4155]: E0319 20:50:38.085359    4155 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 19 20:50:38 no-preload-414130 kubelet[4155]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:50:38 no-preload-414130 kubelet[4155]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:50:38 no-preload-414130 kubelet[4155]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:50:38 no-preload-414130 kubelet[4155]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:50:40 no-preload-414130 kubelet[4155]: E0319 20:50:40.042495    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:40 no-preload-414130 kubelet[4155]: E0319 20:50:40.042549    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:40 no-preload-414130 kubelet[4155]: E0319 20:50:40.042558    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:40 no-preload-414130 kubelet[4155]: E0319 20:50:40.045076    4155 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-27n2b" podUID="2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2"
	Mar 19 20:50:53 no-preload-414130 kubelet[4155]: E0319 20:50:53.042685    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:53 no-preload-414130 kubelet[4155]: E0319 20:50:53.043172    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:53 no-preload-414130 kubelet[4155]: E0319 20:50:53.043260    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:53 no-preload-414130 kubelet[4155]: E0319 20:50:53.042688    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:53 no-preload-414130 kubelet[4155]: E0319 20:50:53.043385    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:53 no-preload-414130 kubelet[4155]: E0319 20:50:53.043394    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:54 no-preload-414130 kubelet[4155]: E0319 20:50:54.043006    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:54 no-preload-414130 kubelet[4155]: E0319 20:50:54.043079    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:54 no-preload-414130 kubelet[4155]: E0319 20:50:54.043086    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:50:54 no-preload-414130 kubelet[4155]: E0319 20:50:54.044484    4155 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-27n2b" podUID="2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2"
	
	
	==> storage-provisioner [5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c] <==
	I0319 20:41:53.906238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:41:53.921209       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:41:53.921274       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:41:53.934619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:41:53.934769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-414130_c816ac63-b66a-4ad5-a87f-6278ef1e14a7!
	I0319 20:41:53.935550       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"472fabba-9618-45f6-b6f3-92f3c84ff7af", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-414130_c816ac63-b66a-4ad5-a87f-6278ef1e14a7 became leader
	I0319 20:41:54.035171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-414130_c816ac63-b66a-4ad5-a87f-6278ef1e14a7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-414130 -n no-preload-414130
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-414130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-27n2b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-414130 describe pod metrics-server-569cc877fc-27n2b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-414130 describe pod metrics-server-569cc877fc-27n2b: exit status 1 (65.113783ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-27n2b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-414130 describe pod metrics-server-569cc877fc-27n2b: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.23s)
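
For reference, the describe call above most likely returns NotFound because the command omits a namespace, so kubectl looks in the default namespace while the pod (per the kubelet log) lives in kube-system; the replica may also have been replaced between the two calls. A minimal sketch of rerunning the same post-mortem by hand, assuming the no-preload-414130 context still exists on the test host:

	# non-running pods across all namespaces, as in helpers_test.go:261
	kubectl --context no-preload-414130 get po -A --field-selector=status.phase!=Running
	# describe the flagged pod with its namespace given explicitly
	kubectl --context no-preload-414130 -n kube-system describe pod metrics-server-569cc877fc-27n2b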

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
E0319 20:44:30.843959   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
E0319 20:45:04.834122   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
E0319 20:47:33.892145   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
E0319 20:49:30.844206   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
E0319 20:50:04.834193   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned the same "dial tcp 192.168.61.28:8443: connect: connection refused" error on 87 further attempts (identical warning lines omitted)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
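For context, the warnings above come from the test helper repeatedly listing pods by label selector until one is Running or the timeout expires; because the apiserver refused every connection, each poll logged a WARNING and the loop eventually hit its deadline. The following is a minimal sketch of that kind of label-selector poll using client-go, not the actual helpers_test.go code; the 5s interval, 9m timeout, function names, and use of wait.PollUntilContextTimeout (recent k8s.io/apimachinery) are illustrative assumptions.

// Illustrative sketch only: poll for pods matching a label selector until
// one is Running or the timeout expires. Not the minikube test helper itself.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDashboardPod(ctx context.Context, client kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// A listing error is logged and the poll continues, which is
				// what produces the repeated WARNING lines in the log above.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Load the kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForDashboardPod(context.Background(), client); err != nil {
		fmt.Println("dashboard pod did not become Running:", err)
	}
}

When the apiserver is unreachable, every iteration returns a transport error, so the loop only ends once the context deadline fires, matching the "context deadline exceeded" result reported below.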
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (251.497069ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-159022" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (242.956334ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
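The two status checks above pass Go templates ({{.APIServer}} and {{.Host}}) to minikube's --format flag, which renders fields of the profile's status. A minimal sketch of how such a template is evaluated is shown here; the clusterStatus struct is a simplified stand-in, not minikube's actual status type, and only the APIServer and Host field names are taken from the log.

// Illustrative sketch only: rendering a {{.APIServer}}-style Go template
// against a status value, as `minikube status --format=...` does.
package main

import (
	"os"
	"text/template"
)

type clusterStatus struct {
	Host      string // e.g. "Running" or "Stopped"
	APIServer string // e.g. "Running" or "Stopped"
}

func main() {
	st := clusterStatus{Host: "Running", APIServer: "Stopped"}

	// Equivalent in spirit to: status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints "Stopped", matching the stdout captured above while the
	// host VM itself still reports "Running".
}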
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-159022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-159022 logs -n 25: (1.573291894s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:33:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:33:00.489344   60008 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:33:00.489594   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489603   60008 out.go:304] Setting ErrFile to fd 2...
	I0319 20:33:00.489607   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489787   60008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:33:00.490297   60008 out.go:298] Setting JSON to false
	I0319 20:33:00.491188   60008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8078,"bootTime":1710872302,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:33:00.491245   60008 start.go:139] virtualization: kvm guest
	I0319 20:33:00.493588   60008 out.go:177] * [default-k8s-diff-port-385240] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:33:00.495329   60008 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:33:00.496506   60008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:33:00.495369   60008 notify.go:220] Checking for updates...
	I0319 20:33:00.499210   60008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:33:00.500494   60008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:33:00.501820   60008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:33:00.503200   60008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:33:00.504837   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:33:00.505191   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.505266   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.519674   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0319 20:33:00.520123   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.520634   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.520656   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.520945   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.521132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.521364   60008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:33:00.521629   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.521660   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.535764   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0319 20:33:00.536105   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.536564   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.536583   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.536890   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.537079   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.572160   60008 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:33:00.573517   60008 start.go:297] selected driver: kvm2
	I0319 20:33:00.573530   60008 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.573663   60008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:33:00.574335   60008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.574423   60008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:33:00.588908   60008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:33:00.589283   60008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:33:00.589354   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:33:00.589375   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:33:00.589419   60008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.589532   60008 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.591715   60008 out.go:177] * Starting "default-k8s-diff-port-385240" primary control-plane node in "default-k8s-diff-port-385240" cluster
	I0319 20:32:58.292485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:01.364553   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:00.593043   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:33:00.593084   60008 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:33:00.593094   60008 cache.go:56] Caching tarball of preloaded images
	I0319 20:33:00.593156   60008 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:33:00.593166   60008 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:33:00.593281   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:33:00.593454   60008 start.go:360] acquireMachinesLock for default-k8s-diff-port-385240: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:33:07.444550   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:10.516480   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:16.596485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:19.668501   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:25.748504   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:28.820525   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:34.900508   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:37.972545   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:44.052478   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:47.124492   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:53.204484   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:56.276536   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:02.356552   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:05.428529   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:11.508540   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:14.580485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:20.660521   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:23.732555   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:29.812516   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:32.884574   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:38.964472   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:42.036583   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:48.116547   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:51.188507   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:54.193037   59415 start.go:364] duration metric: took 3m51.108134555s to acquireMachinesLock for "embed-certs-421660"
	I0319 20:34:54.193108   59415 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:34:54.193120   59415 fix.go:54] fixHost starting: 
	I0319 20:34:54.193458   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:34:54.193487   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:34:54.208614   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0319 20:34:54.209078   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:34:54.209506   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:34:54.209527   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:34:54.209828   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:34:54.209992   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:34:54.210117   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:34:54.211626   59415 fix.go:112] recreateIfNeeded on embed-certs-421660: state=Stopped err=<nil>
	I0319 20:34:54.211661   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	W0319 20:34:54.211820   59415 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:34:54.213989   59415 out.go:177] * Restarting existing kvm2 VM for "embed-certs-421660" ...
	I0319 20:34:54.190431   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:34:54.190483   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.190783   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:34:54.190809   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.191021   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:34:54.192901   59019 machine.go:97] duration metric: took 4m37.398288189s to provisionDockerMachine
	I0319 20:34:54.192939   59019 fix.go:56] duration metric: took 4m37.41948201s for fixHost
	I0319 20:34:54.192947   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 4m37.419503815s
	W0319 20:34:54.192970   59019 start.go:713] error starting host: provision: host is not running
	W0319 20:34:54.193060   59019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0319 20:34:54.193071   59019 start.go:728] Will try again in 5 seconds ...
	I0319 20:34:54.215391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Start
	I0319 20:34:54.215559   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring networks are active...
	I0319 20:34:54.216249   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network default is active
	I0319 20:34:54.216543   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network mk-embed-certs-421660 is active
	I0319 20:34:54.216902   59415 main.go:141] libmachine: (embed-certs-421660) Getting domain xml...
	I0319 20:34:54.217595   59415 main.go:141] libmachine: (embed-certs-421660) Creating domain...
	I0319 20:34:55.407058   59415 main.go:141] libmachine: (embed-certs-421660) Waiting to get IP...
	I0319 20:34:55.407855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.408280   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.408343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.408247   60323 retry.go:31] will retry after 202.616598ms: waiting for machine to come up
	I0319 20:34:55.612753   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.613313   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.613341   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.613247   60323 retry.go:31] will retry after 338.618778ms: waiting for machine to come up
	I0319 20:34:55.953776   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.954230   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.954259   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.954164   60323 retry.go:31] will retry after 389.19534ms: waiting for machine to come up
	I0319 20:34:56.344417   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.344855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.344886   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.344822   60323 retry.go:31] will retry after 555.697854ms: waiting for machine to come up
	I0319 20:34:56.902547   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.902990   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.903017   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.902955   60323 retry.go:31] will retry after 702.649265ms: waiting for machine to come up
	I0319 20:34:57.606823   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:57.607444   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:57.607484   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:57.607388   60323 retry.go:31] will retry after 814.886313ms: waiting for machine to come up
	I0319 20:34:59.194634   59019 start.go:360] acquireMachinesLock for no-preload-414130: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:34:58.424559   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:58.425066   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:58.425088   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:58.425011   60323 retry.go:31] will retry after 948.372294ms: waiting for machine to come up
	I0319 20:34:59.375490   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:59.375857   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:59.375884   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:59.375809   60323 retry.go:31] will retry after 1.206453994s: waiting for machine to come up
	I0319 20:35:00.584114   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:00.584548   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:00.584572   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:00.584496   60323 retry.go:31] will retry after 1.200177378s: waiting for machine to come up
	I0319 20:35:01.786803   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:01.787139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:01.787167   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:01.787085   60323 retry.go:31] will retry after 1.440671488s: waiting for machine to come up
	I0319 20:35:03.229775   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:03.230179   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:03.230216   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:03.230146   60323 retry.go:31] will retry after 2.073090528s: waiting for machine to come up
	I0319 20:35:05.305427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:05.305904   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:05.305930   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:05.305859   60323 retry.go:31] will retry after 3.463824423s: waiting for machine to come up
	I0319 20:35:08.773517   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:08.773911   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:08.773938   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:08.773873   60323 retry.go:31] will retry after 4.159170265s: waiting for machine to come up
	I0319 20:35:12.937475   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937965   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has current primary IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937979   59415 main.go:141] libmachine: (embed-certs-421660) Found IP for machine: 192.168.50.108
	I0319 20:35:12.937987   59415 main.go:141] libmachine: (embed-certs-421660) Reserving static IP address...
	I0319 20:35:12.938372   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.938400   59415 main.go:141] libmachine: (embed-certs-421660) DBG | skip adding static IP to network mk-embed-certs-421660 - found existing host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"}
	I0319 20:35:12.938412   59415 main.go:141] libmachine: (embed-certs-421660) Reserved static IP address: 192.168.50.108
	I0319 20:35:12.938435   59415 main.go:141] libmachine: (embed-certs-421660) Waiting for SSH to be available...
	I0319 20:35:12.938448   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Getting to WaitForSSH function...
	I0319 20:35:12.940523   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.940897   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.940932   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.941037   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH client type: external
	I0319 20:35:12.941069   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa (-rw-------)
	I0319 20:35:12.941102   59415 main.go:141] libmachine: (embed-certs-421660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:12.941116   59415 main.go:141] libmachine: (embed-certs-421660) DBG | About to run SSH command:
	I0319 20:35:12.941128   59415 main.go:141] libmachine: (embed-certs-421660) DBG | exit 0
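	The WaitForSSH probe above simply runs "exit 0" over SSH until the restarted VM answers. Reassembled from the client options logged by libmachine just above (key path and address taken verbatim from those lines, not verified independently), the equivalent manual check would look roughly like this:

	    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	        -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
	        -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa \
	        -p 22 docker@192.168.50.108 'exit 0'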
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:13.068386   59415 main.go:141] libmachine: (embed-certs-421660) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:13.068756   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetConfigRaw
	I0319 20:35:13.069421   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.071751   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072101   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.072133   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072393   59415 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:35:13.072557   59415 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:13.072574   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:13.072781   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.075005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.075369   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.075678   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075816   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075973   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.076134   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.076364   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.076382   59415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:13.188983   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:13.189017   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189291   59415 buildroot.go:166] provisioning hostname "embed-certs-421660"
	I0319 20:35:13.189319   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189503   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.191881   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192190   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.192210   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192389   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.192550   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192696   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192818   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.192989   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.193145   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.193159   59415 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-421660 && echo "embed-certs-421660" | sudo tee /etc/hostname
	I0319 20:35:13.326497   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-421660
	
	I0319 20:35:13.326524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.329344   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.329765   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.330179   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330372   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330547   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.330753   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.330928   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.330943   59415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-421660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-421660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-421660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:13.454265   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:13.454297   59415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:13.454320   59415 buildroot.go:174] setting up certificates
	I0319 20:35:13.454334   59415 provision.go:84] configureAuth start
	I0319 20:35:13.454348   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.454634   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.457258   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457692   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.457723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457834   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.460123   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460436   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.460463   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460587   59415 provision.go:143] copyHostCerts
	I0319 20:35:13.460643   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:13.460652   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:13.460719   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:13.460815   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:13.460822   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:13.460846   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:13.460917   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:13.460924   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:13.460945   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:13.461004   59415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.embed-certs-421660 san=[127.0.0.1 192.168.50.108 embed-certs-421660 localhost minikube]
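	provision.go regenerates a server certificate with the SANs listed above (127.0.0.1, 192.168.50.108, embed-certs-421660, localhost, minikube). A quick way to confirm those SANs made it into the generated certificate, assuming openssl is available on the build host, is:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'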
	I0319 20:35:13.553348   59415 provision.go:177] copyRemoteCerts
	I0319 20:35:13.553399   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:13.553424   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.555729   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556036   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.556071   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556199   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.556406   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.556579   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.556725   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:13.642780   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:35:13.670965   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:13.698335   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:13.724999   59415 provision.go:87] duration metric: took 270.652965ms to configureAuth
	I0319 20:35:13.725022   59415 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:13.725174   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:13.725235   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.727653   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.727969   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.727988   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.728186   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.728410   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728581   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728783   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.728960   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.729113   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.729130   59415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:14.012527   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:14.012554   59415 machine.go:97] duration metric: took 939.982813ms to provisionDockerMachine
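	The "%!s(MISSING)" token in the crio.minikube step above is a Go fmt verb whose argument was not substituted when the command string was re-logged; the command itself contains a literal %s. Judging from the echoed output, the remote command presumably amounts to something like:

	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio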
	I0319 20:35:14.012568   59415 start.go:293] postStartSetup for "embed-certs-421660" (driver="kvm2")
	I0319 20:35:14.012582   59415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:14.012616   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.012969   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:14.012996   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.015345   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015706   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.015759   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015864   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.016069   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.016269   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.016409   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.105236   59415 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:14.110334   59415 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:14.110363   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:14.110435   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:14.110534   59415 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:14.110623   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:14.120911   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:14.148171   59415 start.go:296] duration metric: took 135.590484ms for postStartSetup
	I0319 20:35:14.148209   59415 fix.go:56] duration metric: took 19.955089617s for fixHost
	I0319 20:35:14.148234   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.150788   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.151165   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151331   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.151514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151667   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151784   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.151953   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:14.152125   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:14.152138   59415 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:14.265435   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880514.234420354
	
	I0319 20:35:14.265467   59415 fix.go:216] guest clock: 1710880514.234420354
	I0319 20:35:14.265478   59415 fix.go:229] Guest: 2024-03-19 20:35:14.234420354 +0000 UTC Remote: 2024-03-19 20:35:14.148214105 +0000 UTC m=+251.208119911 (delta=86.206249ms)
	I0319 20:35:14.265507   59415 fix.go:200] guest clock delta is within tolerance: 86.206249ms
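	The clock check above runs what is presumably "date +%s.%N" on the guest (the %!s/%!N tokens are again un-substituted fmt verbs in the captured log) and compares the result against the host clock:

	    date +%s.%N
	    # guest: 1710880514.234420354  vs  host: 2024-03-19 20:35:14.148214105 UTC  ->  delta of ~86ms, within tolerance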
	I0319 20:35:14.265516   59415 start.go:83] releasing machines lock for "embed-certs-421660", held for 20.072435424s
	I0319 20:35:14.265554   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.265868   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:14.268494   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268846   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.268874   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269589   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269751   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269833   59415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:14.269884   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.269956   59415 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:14.269972   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.272604   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272771   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272978   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273137   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273140   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273160   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273316   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273337   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273473   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273685   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.273738   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.358033   59415 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:14.385511   59415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:14.542052   59415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:14.549672   59415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:14.549747   59415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:14.569110   59415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
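	The find/mv step above side-lines pre-existing bridge/podman CNI configs (the log reports 87-podman-bridge.conflist was disabled); "%!p(MISSING)" is most likely a literal find -printf "%p, " whose verb was swallowed by the logger. Reconstructed, the command is roughly:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;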
	I0319 20:35:14.569137   59415 start.go:494] detecting cgroup driver to use...
	I0319 20:35:14.569193   59415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:14.586644   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:14.601337   59415 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:14.601407   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:14.616158   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:14.631754   59415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:14.746576   59415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:14.902292   59415 docker.go:233] disabling docker service ...
	I0319 20:35:14.902353   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:14.920787   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:14.938865   59415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:15.078791   59415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:15.214640   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:15.242992   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:15.264698   59415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:15.264755   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.276750   59415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:15.276817   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.288643   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.300368   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.318906   59415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:15.338660   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.351908   59415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.372022   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
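	Taken together, the sed edits above should leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch of the expected end state inferred from the commands, not captured verbatim in the log):

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]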
	I0319 20:35:15.384124   59415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:15.395206   59415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:15.395268   59415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:15.411193   59415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:15.422031   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:15.572313   59415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:15.730316   59415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:15.730389   59415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:15.738539   59415 start.go:562] Will wait 60s for crictl version
	I0319 20:35:15.738600   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:35:15.743107   59415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:15.788582   59415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:15.788666   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.819444   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.859201   59415 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:15.860492   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:15.863655   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864032   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:15.864063   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864303   59415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:15.870600   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
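	The bash one-liner above rewrites /etc/hosts so the guest can resolve the host machine; after it runs, /etc/hosts should carry a single entry of the form:

	    $ grep host.minikube.internal /etc/hosts
	    192.168.50.1	host.minikube.internal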
	I0319 20:35:15.885694   59415 kubeadm.go:877] updating cluster {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:15.885833   59415 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:15.885890   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:15.924661   59415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:15.924736   59415 ssh_runner.go:195] Run: which lz4
	I0319 20:35:15.929595   59415 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:15.934980   59415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:15.935014   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:17.673355   59415 crio.go:462] duration metric: took 1.743798593s to copy over tarball
	I0319 20:35:17.673428   59415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:20.169596   59415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49612841s)
	I0319 20:35:20.169629   59415 crio.go:469] duration metric: took 2.496240167s to extract the tarball
	I0319 20:35:20.169639   59415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:20.208860   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:20.261040   59415 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:35:20.261063   59415 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:35:20.261071   59415 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.29.3 crio true true} ...
	I0319 20:35:20.261162   59415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-421660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:20.261227   59415 ssh_runner.go:195] Run: crio config
	I0319 20:35:20.311322   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:20.311346   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:20.311359   59415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:20.311377   59415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-421660 NodeName:embed-certs-421660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:35:20.311501   59415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-421660"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:20.311560   59415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:35:20.323700   59415 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:20.323776   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:20.334311   59415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0319 20:35:20.352833   59415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:20.372914   59415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0319 20:35:20.391467   59415 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:20.395758   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
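The two Run lines above make the control-plane.minikube.internal mapping idempotent: any existing line for that hostname is filtered out of /etc/hosts and a fresh "192.168.50.108	control-plane.minikube.internal" entry is appended. A minimal Go sketch of the same update (the helper and its error handling are illustrative, not minikube's code):

// sethosts.go - sketch of an idempotent /etc/hosts update, mirroring the
// grep/echo/cp one-liner logged above. Paths and names come from the log;
// the function itself is a hypothetical helper.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.50.108", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}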
	I0319 20:35:20.408698   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:20.532169   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:20.550297   59415 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660 for IP: 192.168.50.108
	I0319 20:35:20.550320   59415 certs.go:194] generating shared ca certs ...
	I0319 20:35:20.550339   59415 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:20.550507   59415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:20.550574   59415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:20.550586   59415 certs.go:256] generating profile certs ...
	I0319 20:35:20.550700   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/client.key
	I0319 20:35:20.550774   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key.e5ca10b2
	I0319 20:35:20.550824   59415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key
	I0319 20:35:20.550954   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:20.550988   59415 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:20.551001   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:20.551037   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:20.551070   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:20.551101   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:20.551155   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:20.552017   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:20.583444   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:20.616935   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:20.673499   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:20.707988   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0319 20:35:20.734672   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:35:20.761302   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:20.792511   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:20.819903   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:20.848361   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:20.878230   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:20.908691   59415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:20.930507   59415 ssh_runner.go:195] Run: openssl version
	I0319 20:35:20.937088   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:20.949229   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954299   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954343   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.960610   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:20.972162   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:20.984137   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989211   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989273   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.995436   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:21.007076   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:21.018552   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024109   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024146   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.030344   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:21.041615   59415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:21.046986   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:21.053533   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:21.060347   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:21.067155   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:21.074006   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:21.080978   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
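Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now, so a soon-to-expire control-plane certificate is regenerated instead of reused. A short Go sketch of the equivalent check (the helper name and hard-coded path are assumptions for illustration, not minikube's code):

// certcheck.go - sketch of `openssl x509 -checkend 86400` in Go: report
// whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring if "now + window" falls past the certificate's NotAfter.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}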
	I0319 20:35:21.087615   59415 kubeadm.go:391] StartCluster: {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:21.087695   59415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:21.087745   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.131217   59415 cri.go:89] found id: ""
	I0319 20:35:21.131294   59415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:21.143460   59415 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:21.143487   59415 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:21.143493   59415 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:21.143545   59415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:21.156145   59415 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:21.157080   59415 kubeconfig.go:125] found "embed-certs-421660" server: "https://192.168.50.108:8443"
	I0319 20:35:21.158865   59415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:21.171515   59415 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.108
	I0319 20:35:21.171551   59415 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:21.171561   59415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:21.171607   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.221962   59415 cri.go:89] found id: ""
	I0319 20:35:21.222028   59415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:21.239149   59415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:21.250159   59415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:21.250185   59415 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:21.250242   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:21.260035   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:21.260107   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:21.270804   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:21.281041   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:21.281106   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:21.291796   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.301883   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:21.301943   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.313038   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:21.323390   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:21.323462   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:21.333893   59415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:21.344645   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:21.491596   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.349871   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.592803   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.670220   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.802978   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:22.803071   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
	I0319 20:35:23.303983   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.803254   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.846475   59415 api_server.go:72] duration metric: took 1.043496842s to wait for apiserver process to appear ...
	I0319 20:35:23.846509   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:35:23.846532   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:23.847060   59415 api_server.go:269] stopped: https://192.168.50.108:8443/healthz: Get "https://192.168.50.108:8443/healthz": dial tcp 192.168.50.108:8443: connect: connection refused
	I0319 20:35:24.347376   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.456794   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.456826   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.456841   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.492793   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.492827   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.847365   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.857297   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:26.857327   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.346936   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.351748   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:27.351775   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.847430   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.852157   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:35:27.868953   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:35:27.869006   59415 api_server.go:131] duration metric: took 4.022477349s to wait for apiserver health ...
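The sequence above shows the restart path waiting for the apiserver: /healthz first refuses connections, then returns 403 for the anonymous probe, then 500 while the bootstrap post-start hooks finish, and finally 200 after about four seconds. A minimal Go sketch of that polling pattern (URL, interval, timeout and the insecure TLS setting are illustrative assumptions, not minikube's implementation):

// healthz.go - sketch of polling an apiserver /healthz endpoint until it
// returns 200 OK, retrying through connection errors, 403 and 500 responses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The test cluster serves a self-signed certificate; skip verification
			// for this unauthenticated probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.108:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}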
	I0319 20:35:27.869019   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:27.869029   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:27.871083   59415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:35:27.872669   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:35:27.886256   59415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:35:27.912891   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:35:27.928055   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:35:27.928088   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:35:27.928095   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:35:27.928102   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:35:27.928108   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:35:27.928113   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:35:27.928118   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:35:27.928126   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:35:27.928130   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:35:27.928136   59415 system_pods.go:74] duration metric: took 15.221738ms to wait for pod list to return data ...
	I0319 20:35:27.928142   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:35:27.931854   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:35:27.931876   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:35:27.931888   59415 node_conditions.go:105] duration metric: took 3.74189ms to run NodePressure ...
	I0319 20:35:27.931903   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:28.209912   59415 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215315   59415 kubeadm.go:733] kubelet initialised
	I0319 20:35:28.215343   59415 kubeadm.go:734] duration metric: took 5.403708ms waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215353   59415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:28.221636   59415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.230837   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230868   59415 pod_ready.go:81] duration metric: took 9.198177ms for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.230878   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230887   59415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.237452   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237472   59415 pod_ready.go:81] duration metric: took 6.569363ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.237479   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237485   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.242902   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242919   59415 pod_ready.go:81] duration metric: took 5.427924ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.242926   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242931   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.316859   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316889   59415 pod_ready.go:81] duration metric: took 73.950437ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.316901   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316908   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.717107   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717133   59415 pod_ready.go:81] duration metric: took 400.215265ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.717143   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717151   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.117365   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117403   59415 pod_ready.go:81] duration metric: took 400.242952ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.117416   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117427   59415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.517914   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517950   59415 pod_ready.go:81] duration metric: took 400.512217ms for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.517962   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517974   59415 pod_ready.go:38] duration metric: took 1.302609845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:29.518009   59415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:35:29.534665   59415 ops.go:34] apiserver oom_adj: -16
	I0319 20:35:29.534686   59415 kubeadm.go:591] duration metric: took 8.39118752s to restartPrimaryControlPlane
	I0319 20:35:29.534697   59415 kubeadm.go:393] duration metric: took 8.447087595s to StartCluster
	I0319 20:35:29.534713   59415 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.534814   59415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:29.536379   59415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.536620   59415 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:35:29.538397   59415 out.go:177] * Verifying Kubernetes components...
	I0319 20:35:29.536707   59415 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:35:29.536837   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:29.539696   59415 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-421660"
	I0319 20:35:29.539709   59415 addons.go:69] Setting metrics-server=true in profile "embed-certs-421660"
	I0319 20:35:29.539739   59415 addons.go:234] Setting addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:29.539747   59415 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-421660"
	W0319 20:35:29.539751   59415 addons.go:243] addon metrics-server should already be in state true
	W0319 20:35:29.539757   59415 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:35:29.539782   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539786   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539700   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:29.539700   59415 addons.go:69] Setting default-storageclass=true in profile "embed-certs-421660"
	I0319 20:35:29.539882   59415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-421660"
	I0319 20:35:29.540079   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540098   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540107   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540120   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540243   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540282   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.554668   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0319 20:35:29.554742   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0319 20:35:29.554815   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0319 20:35:29.555109   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555148   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555220   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555703   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555708   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555722   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555726   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555828   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555847   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.556077   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556206   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556273   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.556627   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556669   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.556753   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556787   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.559109   59415 addons.go:234] Setting addon default-storageclass=true in "embed-certs-421660"
	W0319 20:35:29.559126   59415 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:35:29.559150   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.559390   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.559425   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.570567   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0319 20:35:29.571010   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.571467   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.571492   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.571831   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.572018   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.573621   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.575889   59415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:35:29.574300   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0319 20:35:29.574529   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0319 20:35:29.577448   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:35:29.577473   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:35:29.577496   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.577913   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.577957   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.578350   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578382   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.578751   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.578877   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578901   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.579318   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.579431   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.579495   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.579509   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.580582   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581050   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.581074   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581166   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.581276   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.583314   59415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:29.581522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.584941   59415 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.584951   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:35:29.584963   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.584980   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.585154   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.587700   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588076   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.588104   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588289   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.588463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.588614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.588791   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.594347   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0319 20:35:29.594626   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.595030   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.595062   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.595384   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.595524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.596984   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.597209   59415 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.597224   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:35:29.597238   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.599955   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.600457   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600533   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.600682   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.600829   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.600926   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
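The sshutil.go:53 entries above each open a key-based SSH connection to the embed-certs-421660 node at 192.168.50.108:22 as user "docker" so the addon manifests can be copied over and applied. Below is a minimal, illustrative Go sketch of that kind of SSH client using golang.org/x/crypto/ssh; the host, user and key path are taken from the log, but the helper itself is an assumption for illustration, not minikube's actual sshutil/ssh_runner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials host:22 with a private key and runs a single command,
// roughly the shape of the "new ssh client" / "Run:" pairs in the log above.
func runOverSSH(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The log's ssh options also disable host key checking
		// (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null).
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Host, user, key path and command are values that appear in the log above.
	out, err := runOverSSH("192.168.50.108", "docker",
		"/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa",
		"sudo systemctl start kubelet")
	fmt.Println(out, err)
}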
	I0319 20:35:29.719989   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:29.737348   59415 node_ready.go:35] waiting up to 6m0s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:29.839479   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.839994   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:35:29.840016   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:35:29.852112   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.904335   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:35:29.904358   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:35:29.969646   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:29.969675   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:35:30.031528   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:31.120085   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.280572793s)
	I0319 20:35:31.120135   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120148   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120172   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268019206s)
	I0319 20:35:31.120214   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120229   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120430   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120448   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120457   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120544   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120564   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120588   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120606   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120758   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120788   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120827   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120833   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120841   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.127070   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.127085   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.127287   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.127301   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.138956   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.107385118s)
	I0319 20:35:31.139006   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139027   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139257   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139301   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139319   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139330   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139342   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139546   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139550   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139564   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139579   59415 addons.go:470] Verifying addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:31.141587   59415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:31.142927   59415 addons.go:505] duration metric: took 1.606231661s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:35:31.741584   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
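The node_ready.go lines above poll the embed-certs-421660 node until its Ready condition turns True, giving up after 6m0s; the later entries at 20:35:36 show it flipping to Ready after roughly 7 seconds. A rough client-go sketch of that kind of wait loop follows; the 2-second poll interval and the clientset wiring are assumptions for illustration, not minikube's actual node_ready implementation (only the kubeconfig path and node name come from the log).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady blocks until the named node reports Ready=True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Kubeconfig path as seen in the "sudo KUBECONFIG=..." commands above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(context.Background(), cs, "embed-certs-421660", 6*time.Minute))
}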
	I0319 20:35:36.101840   60008 start.go:364] duration metric: took 2m35.508355671s to acquireMachinesLock for "default-k8s-diff-port-385240"
	I0319 20:35:36.101908   60008 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:36.101921   60008 fix.go:54] fixHost starting: 
	I0319 20:35:36.102308   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:36.102352   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:36.118910   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0319 20:35:36.119363   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:36.119926   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:35:36.119957   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:36.120271   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:36.120450   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:36.120614   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:35:36.122085   60008 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385240: state=Stopped err=<nil>
	I0319 20:35:36.122112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	W0319 20:35:36.122284   60008 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:36.124242   60008 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385240" ...
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
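The fix.go lines above compare the guest's clock, read over SSH (the mangled "date +%!s(MISSING).%!N(MISSING)" command is apparently date +%s.%N with the format verbs eaten by the logger), against the host's wall clock, and proceed only because the ~92ms delta is within tolerance. A toy version of that check is sketched below; the one-second tolerance is an assumed value for illustration, the log does not state the real threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "seconds.nanoseconds" timestamp (as printed by
// `date +%s.%N`) and returns guest minus host, matching the sign used in the log.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Guest timestamp and host time taken from the fix.go:229 line above.
	d, err := clockDelta("1710880536.082251645", time.Unix(1710880535, 990292857))
	if err != nil {
		panic(err)
	}
	// A 1s tolerance is purely an assumption for this example.
	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(d.Seconds()) < 1.0)
}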
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
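The sed edits at 20:35:37.105 through 20:35:37.141 rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.2 as the pause image and cgroupfs as the cgroup manager, with conmon placed in the "pod" cgroup, before crio is restarted. After those edits the drop-in should contain lines roughly like the sketch below; the section headers are assumptions based on the stock CRI-O config layout and do not appear in the log.

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"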
	I0319 20:35:33.742461   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.241937   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.743420   59415 node_ready.go:49] node "embed-certs-421660" has status "Ready":"True"
	I0319 20:35:36.743447   59415 node_ready.go:38] duration metric: took 7.006070851s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:36.743458   59415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:36.749810   59415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:36.125778   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Start
	I0319 20:35:36.125974   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring networks are active...
	I0319 20:35:36.126542   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network default is active
	I0319 20:35:36.126934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network mk-default-k8s-diff-port-385240 is active
	I0319 20:35:36.127367   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Getting domain xml...
	I0319 20:35:36.128009   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Creating domain...
	I0319 20:35:37.396589   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting to get IP...
	I0319 20:35:37.397626   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398211   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398294   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.398203   60655 retry.go:31] will retry after 263.730992ms: waiting for machine to come up
	I0319 20:35:37.663811   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664345   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664379   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.664300   60655 retry.go:31] will retry after 308.270868ms: waiting for machine to come up
	I0319 20:35:37.974625   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975061   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975095   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.975027   60655 retry.go:31] will retry after 376.884777ms: waiting for machine to come up
	I0319 20:35:38.353624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354101   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354129   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.354056   60655 retry.go:31] will retry after 419.389718ms: waiting for machine to come up
	I0319 20:35:38.774777   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775271   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775299   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.775224   60655 retry.go:31] will retry after 757.534448ms: waiting for machine to come up
	I0319 20:35:39.534258   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534739   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534766   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:39.534698   60655 retry.go:31] will retry after 921.578914ms: waiting for machine to come up
	I0319 20:35:40.457637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458154   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:40.458092   60655 retry.go:31] will retry after 1.079774724s: waiting for machine to come up
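The retry.go:31 lines above (and the earlier ones at 20:35:27 to 20:35:30 for old-k8s-version-159022) show the kvm2 driver polling libvirt for the VM's DHCP lease and sleeping a progressively longer, jittered interval between attempts. A generic Go sketch of that retry pattern follows; the doubling factor, jitter and attempt cap are assumptions for illustration, not the values minikube's retry package actually uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds, the attempts run out,
// or the overall deadline passes, sleeping a growing, jittered delay in between.
// This mirrors the shape of the "will retry after ...: waiting for machine to
// come up" lines in the log above.
func retryWithBackoff(fn func() error, attempts int, base, deadline time.Duration) error {
	start := time.Now()
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			break
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1)) // up to 50% jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10, 250*time.Millisecond, 2*time.Minute)
	fmt.Println("result:", err)
}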
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
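	The two commands above are the idempotent host-entry update: grep checks whether 192.168.61.1 is already mapped to host.minikube.internal, and the follow-up one-liner strips any stale line for that name, appends the fresh "IP<tab>name" pair, and copies the temp file back over /etc/hosts with sudo. A minimal local sketch of the same pattern in Go follows; the helper name and hard-coded values are illustrative, not minikube code, and writing /etc/hosts requires root.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any existing line ending in "<tab>name" and
    // appends "ip<tab>name", mirroring the bash one-liner in the log above.
    func upsertHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this host, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
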
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:38.759504   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.258580   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.539643   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540163   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:41.540113   60655 retry.go:31] will retry after 1.174814283s: waiting for machine to come up
	I0319 20:35:42.716195   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716547   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716576   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:42.716510   60655 retry.go:31] will retry after 1.464439025s: waiting for machine to come up
	I0319 20:35:44.183190   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183673   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183701   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:44.183628   60655 retry.go:31] will retry after 2.304816358s: waiting for machine to come up
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
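	This block is the preload path: `crictl images` shows none of the expected v1.20.0 images in the runtime, so the ~473 MB preloaded tarball is copied into the VM and unpacked straight into /var with lz4-compressed tar (preserving security xattrs), after which the tarball is deleted. A rough local equivalent of the extract-and-clean-up step, sketched in Go; minikube runs this over SSH, and the wrapper below is illustrative only.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same flags as the log line: keep security xattrs, decompress with lz4,
        // unpack into /var, then remove the tarball.
        extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
        if err := extract.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
        if err := os.Remove("/preloaded.tar.lz4"); err != nil {
            fmt.Fprintln(os.Stderr, "cleanup failed:", err)
        }
    }
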
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
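	Each `openssl x509 -noout -checkend 86400` call above asks one question per certificate: will it still be valid 24 hours (86400 seconds) from now? openssl exits 0 when the cert will not expire within that window. The same check expressed with Go's crypto/x509, as a sketch; the path is one of the files from the log and the helper name is made up for illustration.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path will expire
    // within duration d -- the inverse of a successful "-checkend" run.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
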
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
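	The restart path above probes each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and removes any file that does not reference it (here the files simply do not exist yet), so the `kubeadm init phase certs` and `kubeadm init phase kubeconfig` commands that follow can regenerate them. A compact sketch of that check-and-remove loop; the file list and endpoint are taken from the log, and the loop itself is illustrative rather than minikube's code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing at the wrong endpoint: remove it so
                // kubeadm regenerates a fresh kubeconfig.
                _ = os.Remove(f)
                fmt.Println("removed (or absent):", f)
            }
        }
    }
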
	I0319 20:35:43.756917   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:44.893540   59415 pod_ready.go:92] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.893576   59415 pod_ready.go:81] duration metric: took 8.143737931s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.893592   59415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903602   59415 pod_ready.go:92] pod "etcd-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.903640   59415 pod_ready.go:81] duration metric: took 10.03087ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903653   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926651   59415 pod_ready.go:92] pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.926682   59415 pod_ready.go:81] duration metric: took 23.020281ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926696   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935080   59415 pod_ready.go:92] pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.935113   59415 pod_ready.go:81] duration metric: took 8.409239ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935126   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947241   59415 pod_ready.go:92] pod "kube-proxy-qvn26" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.947269   59415 pod_ready.go:81] duration metric: took 12.135421ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947280   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155416   59415 pod_ready.go:92] pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:45.155441   59415 pod_ready.go:81] duration metric: took 208.152938ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155460   59415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:47.165059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
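	The interleaved pod_ready.go lines (process 59415, the embed-certs-421660 run) poll each kube-system pod until its PodReady condition turns True, with a 6m0s budget per pod; at this point metrics-server-57f55c9bc5-xbh7v is still reporting Ready=False. A minimal client-go version of that readiness check, sketched under the assumption of a standard kubeconfig in the default location; this is not minikube's own helper.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same shape as the log: keep polling until Ready or the 6m budget runs out.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if ok, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-xbh7v"); err == nil && ok {
                fmt.Println("Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }
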
	I0319 20:35:46.490600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491092   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491121   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:46.491050   60655 retry.go:31] will retry after 2.347371858s: waiting for machine to come up
	I0319 20:35:48.841516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.841995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.842018   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:48.841956   60655 retry.go:31] will retry after 2.70576525s: waiting for machine to come up
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
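	Process 59621 is now in the post-`kubeadm init phase` wait: it re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms until an apiserver process appears or the wait times out. The generic shape of that poll loop, sketched in Go; the interval and timeout values below are illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor re-runs check at the given interval until it returns true or the
    // timeout elapses -- the same pattern as the repeated pgrep calls above.
    func waitFor(check func() bool, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if check() {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("condition not met within %s", timeout)
    }

    func main() {
        apiserverRunning := func() bool {
            // pgrep exits 0 only when a matching process exists.
            return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
        }
        if err := waitFor(apiserverRunning, 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("kube-apiserver is up")
    }
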
	I0319 20:35:49.662363   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.662389   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.549431   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549931   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549959   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:51.549900   60655 retry.go:31] will retry after 3.429745322s: waiting for machine to come up
	I0319 20:35:54.983382   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.983875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Found IP for machine: 192.168.39.77
	I0319 20:35:54.983908   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserving static IP address...
	I0319 20:35:54.983923   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has current primary IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.984212   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.984240   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserved static IP address: 192.168.39.77
	I0319 20:35:54.984292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | skip adding static IP to network mk-default-k8s-diff-port-385240 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"}
	I0319 20:35:54.984307   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for SSH to be available...
	I0319 20:35:54.984322   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Getting to WaitForSSH function...
	I0319 20:35:54.986280   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986591   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.986624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986722   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH client type: external
	I0319 20:35:54.986752   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa (-rw-------)
	I0319 20:35:54.986783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:54.986796   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | About to run SSH command:
	I0319 20:35:54.986805   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | exit 0
	I0319 20:35:55.112421   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:55.112825   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetConfigRaw
	I0319 20:35:55.113456   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.115976   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116349   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.116377   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116587   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:35:55.116847   60008 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:55.116874   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:55.117099   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.119475   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.119911   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.119947   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.120112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.120312   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120478   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120629   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.120793   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.120970   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.120982   60008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:55.229055   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:55.229090   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229360   60008 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385240"
	I0319 20:35:55.229390   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229594   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.232039   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232371   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.232391   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232574   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.232746   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232866   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232967   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.233087   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.233251   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.233264   60008 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385240 && echo "default-k8s-diff-port-385240" | sudo tee /etc/hostname
	I0319 20:35:55.355708   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385240
	
	I0319 20:35:55.355732   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.358292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358610   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.358641   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358880   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.359105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359267   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359415   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.359545   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.359701   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.359724   60008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:55.479083   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:55.479109   60008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:55.479126   60008 buildroot.go:174] setting up certificates
	I0319 20:35:55.479134   60008 provision.go:84] configureAuth start
	I0319 20:35:55.479143   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.479433   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.482040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482378   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.482408   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482535   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.484637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485035   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.485062   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485212   60008 provision.go:143] copyHostCerts
	I0319 20:35:55.485272   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:55.485283   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:55.485334   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:55.485425   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:55.485434   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:55.485454   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:55.485560   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:55.485569   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:55.485586   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:55.485642   60008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385240 san=[127.0.0.1 192.168.39.77 default-k8s-diff-port-385240 localhost minikube]
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.449710   59019 start.go:364] duration metric: took 57.255031003s to acquireMachinesLock for "no-preload-414130"
	I0319 20:35:56.449774   59019 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:56.449786   59019 fix.go:54] fixHost starting: 
	I0319 20:35:56.450187   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:56.450225   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:56.469771   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0319 20:35:56.470265   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:56.470764   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:35:56.470799   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:56.471187   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:56.471362   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:35:56.471545   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:35:56.473295   59019 fix.go:112] recreateIfNeeded on no-preload-414130: state=Stopped err=<nil>
	I0319 20:35:56.473323   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	W0319 20:35:56.473480   59019 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:56.475296   59019 out.go:177] * Restarting existing kvm2 VM for "no-preload-414130" ...
	I0319 20:35:56.476767   59019 main.go:141] libmachine: (no-preload-414130) Calling .Start
	I0319 20:35:56.476947   59019 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:35:56.477657   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:35:56.478036   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:35:56.478443   59019 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:35:56.479131   59019 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:35:53.663220   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:56.163557   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:55.738705   60008 provision.go:177] copyRemoteCerts
	I0319 20:35:55.738779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:55.738812   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.741292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741618   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.741644   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741835   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.741997   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.742105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.742260   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:55.828017   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:55.854341   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0319 20:35:55.881167   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:55.906768   60008 provision.go:87] duration metric: took 427.621358ms to configureAuth
	I0319 20:35:55.906795   60008 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:55.907007   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:55.907097   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.909518   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.909834   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.909863   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.910008   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.910193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910492   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.910670   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.910835   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.910849   60008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:56.207010   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:56.207036   60008 machine.go:97] duration metric: took 1.090170805s to provisionDockerMachine
	I0319 20:35:56.207049   60008 start.go:293] postStartSetup for "default-k8s-diff-port-385240" (driver="kvm2")
	I0319 20:35:56.207066   60008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:56.207086   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.207410   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:56.207435   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.210075   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210494   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.210526   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210671   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.210828   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.211016   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.211167   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.295687   60008 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:56.300508   60008 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:56.300531   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:56.300601   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:56.300677   60008 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:56.300779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:56.310829   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:56.337456   60008 start.go:296] duration metric: took 130.396402ms for postStartSetup
	I0319 20:35:56.337492   60008 fix.go:56] duration metric: took 20.235571487s for fixHost
	I0319 20:35:56.337516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.339907   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340361   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.340388   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340552   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.340749   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.340888   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.341040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.341198   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:56.341357   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:56.341367   60008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:56.449557   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880556.425761325
	
	I0319 20:35:56.449580   60008 fix.go:216] guest clock: 1710880556.425761325
	I0319 20:35:56.449587   60008 fix.go:229] Guest: 2024-03-19 20:35:56.425761325 +0000 UTC Remote: 2024-03-19 20:35:56.337496936 +0000 UTC m=+175.893119280 (delta=88.264389ms)
	I0319 20:35:56.449619   60008 fix.go:200] guest clock delta is within tolerance: 88.264389ms
	I0319 20:35:56.449624   60008 start.go:83] releasing machines lock for "default-k8s-diff-port-385240", held for 20.347739998s
	I0319 20:35:56.449647   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.449915   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:56.452764   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453172   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.453204   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453363   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.453973   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454275   60008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:56.454328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.454443   60008 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:56.454466   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.457060   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457284   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457383   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457418   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457536   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457555   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457567   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457831   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457977   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458126   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458139   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.458282   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.537675   60008 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:56.564279   60008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:56.708113   60008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:56.716216   60008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:56.716301   60008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:56.738625   60008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:56.738643   60008 start.go:494] detecting cgroup driver to use...
	I0319 20:35:56.738707   60008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:56.756255   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:56.772725   60008 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:56.772785   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:56.793261   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:56.812368   60008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:56.948137   60008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:57.139143   60008 docker.go:233] disabling docker service ...
	I0319 20:35:57.139212   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:57.156414   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:57.173655   60008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:57.313924   60008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:57.459539   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:57.478913   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:57.506589   60008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:57.506663   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.520813   60008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:57.520871   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.534524   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.547833   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.568493   60008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:57.582367   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.595859   60008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.616441   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.633329   60008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:57.648803   60008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:57.648886   60008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:57.667845   60008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:57.680909   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:57.825114   60008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:57.996033   60008 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:57.996118   60008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:58.001875   60008 start.go:562] Will wait 60s for crictl version
	I0319 20:35:58.001947   60008 ssh_runner.go:195] Run: which crictl
	I0319 20:35:58.006570   60008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:58.060545   60008 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:58.060628   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.104858   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.148992   60008 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:58.150343   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:58.153222   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153634   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:58.153663   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153924   60008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:58.158830   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:58.174622   60008 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:58.174760   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:58.174819   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:58.220802   60008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:58.220879   60008 ssh_runner.go:195] Run: which lz4
	I0319 20:35:58.225914   60008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:58.230673   60008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:58.230702   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:59.959612   60008 crio.go:462] duration metric: took 1.733738299s to copy over tarball
	I0319 20:35:59.959694   60008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.782684   59019 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:35:57.783613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:57.784088   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:57.784180   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:57.784077   60806 retry.go:31] will retry after 304.011729ms: waiting for machine to come up
	I0319 20:35:58.089864   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.090398   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.090431   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.090325   60806 retry.go:31] will retry after 268.702281ms: waiting for machine to come up
	I0319 20:35:58.360743   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.361173   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.361201   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.361116   60806 retry.go:31] will retry after 373.34372ms: waiting for machine to come up
	I0319 20:35:58.735810   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.736490   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.736518   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.736439   60806 retry.go:31] will retry after 588.9164ms: waiting for machine to come up
	I0319 20:35:59.327363   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.327908   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.327938   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.327881   60806 retry.go:31] will retry after 623.38165ms: waiting for machine to come up
	I0319 20:35:59.952641   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.953108   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.953138   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.953090   60806 retry.go:31] will retry after 896.417339ms: waiting for machine to come up
	I0319 20:36:00.851032   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:00.851485   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:00.851514   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:00.851435   60806 retry.go:31] will retry after 869.189134ms: waiting for machine to come up
	I0319 20:35:58.168341   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:00.664629   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:02.594104   60008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634373226s)
	I0319 20:36:02.594140   60008 crio.go:469] duration metric: took 2.634502157s to extract the tarball
	I0319 20:36:02.594149   60008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:36:02.635454   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:02.692442   60008 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:36:02.692468   60008 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:36:02.692477   60008 kubeadm.go:928] updating node { 192.168.39.77 8444 v1.29.3 crio true true} ...
	I0319 20:36:02.692613   60008 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-385240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:02.692697   60008 ssh_runner.go:195] Run: crio config
	I0319 20:36:02.749775   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:02.749798   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:02.749809   60008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:02.749828   60008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385240 NodeName:default-k8s-diff-port-385240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:02.749967   60008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-385240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:02.750034   60008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:36:02.760788   60008 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:02.760843   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:02.770999   60008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0319 20:36:02.789881   60008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:36:02.809005   60008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0319 20:36:02.831122   60008 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:02.835609   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:02.850186   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:02.990032   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:03.013831   60008 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240 for IP: 192.168.39.77
	I0319 20:36:03.013858   60008 certs.go:194] generating shared ca certs ...
	I0319 20:36:03.013879   60008 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:03.014072   60008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:03.014125   60008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:03.014137   60008 certs.go:256] generating profile certs ...
	I0319 20:36:03.014256   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.key
	I0319 20:36:03.014325   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key.5c19d013
	I0319 20:36:03.014389   60008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key
	I0319 20:36:03.014549   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:03.014602   60008 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:03.014626   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:03.014658   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:03.014691   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:03.014728   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:03.014793   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:03.015673   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:03.070837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:03.115103   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:03.150575   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:03.210934   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 20:36:03.254812   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:36:03.286463   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:03.315596   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:03.347348   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:03.375837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:03.407035   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:03.439726   60008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:03.461675   60008 ssh_runner.go:195] Run: openssl version
	I0319 20:36:03.468238   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:03.482384   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487682   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487739   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.494591   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:03.509455   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:03.522545   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527556   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527617   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.533925   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:03.546851   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:03.559553   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564547   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564595   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.570824   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:03.584339   60008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:03.589542   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:03.595870   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:03.602530   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:03.609086   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:03.615621   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:03.622477   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0319 20:36:03.629097   60008 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:03.629186   60008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:03.629234   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.674484   60008 cri.go:89] found id: ""
	I0319 20:36:03.674568   60008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:03.686995   60008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:03.687020   60008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:03.687026   60008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:03.687094   60008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:03.702228   60008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:03.703334   60008 kubeconfig.go:125] found "default-k8s-diff-port-385240" server: "https://192.168.39.77:8444"
	I0319 20:36:03.705508   60008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:03.719948   60008 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.77
	I0319 20:36:03.719985   60008 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:03.719997   60008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:03.720073   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.761557   60008 cri.go:89] found id: ""
	I0319 20:36:03.761619   60008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:03.781849   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:03.793569   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:03.793601   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:03.793652   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:36:03.804555   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:03.804605   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:03.816728   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:36:03.828247   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:03.828318   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:03.840814   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.853100   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:03.853168   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.867348   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:36:03.879879   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:03.879944   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:03.893810   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:03.906056   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:04.038911   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.173514   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134566983s)
	I0319 20:36:05.173547   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.395951   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.480821   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.721671   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:01.722186   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:01.722212   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:01.722142   60806 retry.go:31] will retry after 997.299446ms: waiting for machine to come up
	I0319 20:36:02.720561   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:02.721007   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:02.721037   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:02.720958   60806 retry.go:31] will retry after 1.64420318s: waiting for machine to come up
	I0319 20:36:04.367668   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:04.368140   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:04.368179   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:04.368083   60806 retry.go:31] will retry after 1.972606192s: waiting for machine to come up
	I0319 20:36:06.342643   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:06.343192   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:06.343236   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:06.343136   60806 retry.go:31] will retry after 2.056060208s: waiting for machine to come up
	I0319 20:36:03.164447   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.665089   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.581797   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:05.581879   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.082565   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.582872   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.628756   60008 api_server.go:72] duration metric: took 1.046965637s to wait for apiserver process to appear ...
	I0319 20:36:06.628786   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:06.628808   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:06.629340   60008 api_server.go:269] stopped: https://192.168.39.77:8444/healthz: Get "https://192.168.39.77:8444/healthz": dial tcp 192.168.39.77:8444: connect: connection refused
	I0319 20:36:07.128890   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.231991   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.232024   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.232039   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.280784   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.280820   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.629356   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.660326   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:09.660434   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.128936   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.139305   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:10.139336   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.629187   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.635922   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:36:10.654111   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:36:10.654137   60008 api_server.go:131] duration metric: took 4.025345365s to wait for apiserver health ...
	I0319 20:36:10.654146   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:10.654154   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:10.656104   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.401478   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:08.402086   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:08.402111   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:08.402001   60806 retry.go:31] will retry after 2.487532232s: waiting for machine to come up
	I0319 20:36:10.891005   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:10.891550   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:10.891591   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:10.891503   60806 retry.go:31] will retry after 3.741447035s: waiting for machine to come up
	I0319 20:36:08.163468   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.165537   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:12.661667   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.657654   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:10.672795   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:10.715527   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:10.728811   60008 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:10.728850   60008 system_pods.go:61] "coredns-76f75df574-hsdk2" [319e5411-97e4-4021-80d0-b39195acb696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:10.728862   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [d10870b0-a0e1-47aa-baf9-07065c1d9142] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:10.728873   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [4925af1b-328f-42ee-b2ef-78b58fcbdd0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:10.728883   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [6dad1c39-3fbc-4364-9ed8-725c0f518191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:10.728889   60008 system_pods.go:61] "kube-proxy-bwj22" [9cc86566-612e-48bc-94c9-a2dad6978c92] Running
	I0319 20:36:10.728896   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [e9c38443-ea8c-4590-94ca-61077f850b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:10.728904   60008 system_pods.go:61] "metrics-server-57f55c9bc5-ddl2q" [ecb174e4-18b0-459e-afb1-137a1f6bdd67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:10.728919   60008 system_pods.go:61] "storage-provisioner" [95fb27b5-769c-4420-8021-3d97942c9f42] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0319 20:36:10.728931   60008 system_pods.go:74] duration metric: took 13.321799ms to wait for pod list to return data ...
	I0319 20:36:10.728944   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:10.743270   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:10.743312   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:10.743326   60008 node_conditions.go:105] duration metric: took 14.37332ms to run NodePressure ...
	I0319 20:36:10.743348   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:11.028786   60008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034096   60008 kubeadm.go:733] kubelet initialised
	I0319 20:36:11.034115   60008 kubeadm.go:734] duration metric: took 5.302543ms waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034122   60008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:11.040118   60008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.046021   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046048   60008 pod_ready.go:81] duration metric: took 5.906752ms for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.046060   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046069   60008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.051677   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051700   60008 pod_ready.go:81] duration metric: took 5.61463ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.051712   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051721   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.057867   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057893   60008 pod_ready.go:81] duration metric: took 6.163114ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.057905   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057912   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:13.065761   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.634526   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:14.635125   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:14.635155   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:14.635074   60806 retry.go:31] will retry after 3.841866145s: waiting for machine to come up
	I0319 20:36:14.662669   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.664913   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:15.565340   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:17.567623   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:19.570775   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.479276   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.479810   59019 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:36:18.479836   59019 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:36:18.479852   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.480232   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.480279   59019 main.go:141] libmachine: (no-preload-414130) DBG | skip adding static IP to network mk-no-preload-414130 - found existing host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"}
	I0319 20:36:18.480297   59019 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:36:18.480319   59019 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:36:18.480336   59019 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:36:18.482725   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483025   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.483052   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483228   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:36:18.483262   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:36:18.483299   59019 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:36:18.483320   59019 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:36:18.483373   59019 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:36:18.612349   59019 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
	I0319 20:36:18.612766   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:36:18.613495   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.616106   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616459   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.616498   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616729   59019 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:36:18.616940   59019 machine.go:94] provisionDockerMachine start ...
	I0319 20:36:18.616957   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:18.617150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.619316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619599   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.619620   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619750   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.619895   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620054   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620166   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.620339   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.620508   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.620521   59019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:36:18.729177   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:36:18.729203   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729483   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:36:18.729511   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729728   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.732330   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732633   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.732664   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.732944   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733087   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733211   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.733347   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.733513   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.733528   59019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:36:18.857142   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:36:18.857178   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.860040   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860434   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.860465   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860682   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.860907   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861102   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861283   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.861462   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.861661   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.861685   59019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:36:18.976726   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:36:18.976755   59019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:36:18.976776   59019 buildroot.go:174] setting up certificates
	I0319 20:36:18.976789   59019 provision.go:84] configureAuth start
	I0319 20:36:18.976803   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.977095   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.980523   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.980948   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.980976   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.981150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.983394   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983720   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.983741   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983887   59019 provision.go:143] copyHostCerts
	I0319 20:36:18.983949   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:36:18.983959   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:36:18.984009   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:36:18.984092   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:36:18.984099   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:36:18.984118   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:36:18.984224   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:36:18.984237   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:36:18.984284   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:36:18.984348   59019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
	I0319 20:36:19.241365   59019 provision.go:177] copyRemoteCerts
	I0319 20:36:19.241422   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:36:19.241445   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.244060   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244362   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.244388   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244593   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.244781   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.244956   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.245125   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.332749   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:36:19.360026   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:36:19.386680   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:36:19.414673   59019 provision.go:87] duration metric: took 437.87318ms to configureAuth
	I0319 20:36:19.414697   59019 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:36:19.414893   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:36:19.414964   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.417627   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.417949   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.417974   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.418139   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.418351   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418513   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418687   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.418854   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.419099   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.419120   59019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:36:19.712503   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:36:19.712538   59019 machine.go:97] duration metric: took 1.095583423s to provisionDockerMachine
	I0319 20:36:19.712554   59019 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:36:19.712573   59019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:36:19.712595   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.712918   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:36:19.712953   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.715455   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715779   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.715813   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715917   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.716098   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.716307   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.716455   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.801402   59019 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:36:19.806156   59019 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:36:19.806181   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:36:19.806253   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:36:19.806330   59019 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:36:19.806451   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:36:19.818601   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:19.845698   59019 start.go:296] duration metric: took 133.131789ms for postStartSetup
	I0319 20:36:19.845728   59019 fix.go:56] duration metric: took 23.395944884s for fixHost
	I0319 20:36:19.845746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.848343   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848727   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.848760   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848909   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.849090   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849256   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849452   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.849667   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.849843   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.849853   59019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:36:19.957555   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880579.901731357
	
	I0319 20:36:19.957574   59019 fix.go:216] guest clock: 1710880579.901731357
	I0319 20:36:19.957581   59019 fix.go:229] Guest: 2024-03-19 20:36:19.901731357 +0000 UTC Remote: 2024-03-19 20:36:19.845732308 +0000 UTC m=+363.236094224 (delta=55.999049ms)
	I0319 20:36:19.957612   59019 fix.go:200] guest clock delta is within tolerance: 55.999049ms
	I0319 20:36:19.957625   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 23.507874645s
	I0319 20:36:19.957656   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.957889   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:19.960613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.960930   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.960957   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.961108   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961627   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961804   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961883   59019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:36:19.961930   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.961996   59019 ssh_runner.go:195] Run: cat /version.json
	I0319 20:36:19.962022   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.964593   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.964790   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965034   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965057   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965250   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965368   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965397   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965416   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965529   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965611   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965677   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965764   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.965788   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965893   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:20.041410   59019 ssh_runner.go:195] Run: systemctl --version
	I0319 20:36:20.067540   59019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:36:20.214890   59019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:36:20.222680   59019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:36:20.222735   59019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:36:20.239981   59019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:36:20.240003   59019 start.go:494] detecting cgroup driver to use...
	I0319 20:36:20.240066   59019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:36:20.260435   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:36:20.277338   59019 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:36:20.277398   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:36:20.294069   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:36:20.309777   59019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:36:20.443260   59019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:36:20.595476   59019 docker.go:233] disabling docker service ...
	I0319 20:36:20.595552   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:36:20.612622   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:36:20.627717   59019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:36:20.790423   59019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:36:20.915434   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
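
Before configuring CRI-O, the runner stops, disables and masks the cri-docker and docker units so only one runtime owns the CRI socket. A rough Go sketch of that systemctl sequence (simplified: the log issues a slightly different mix of stop/disable/mask per unit; error handling is reduced):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableUnit stops, disables and masks a systemd unit so it cannot come
    // back behind the chosen container runtime's back.
    func disableUnit(unit string) error {
        for _, args := range [][]string{
            {"stop", "-f", unit},
            {"disable", unit},
            {"mask", unit},
        } {
            cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
            if err := disableUnit(u); err != nil {
                fmt.Println("warning:", err)
            }
        }
    }
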
	I0319 20:36:20.932043   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:36:20.953955   59019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:36:20.954026   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.966160   59019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:36:20.966230   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.978217   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.990380   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.002669   59019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:36:21.014880   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.026125   59019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.045239   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
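
The crio.go lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A hedged Go sketch of the same line-oriented rewrite (the file path and regex approach are illustrative, not minikube's code; the sysctl edit is omitted for brevity):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf applies the same line-oriented substitutions the log
    // performs with sed: pin the pause image and force the cgroupfs manager
    // with conmon running in the pod cgroup.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("02-crio.conf"); err != nil {
            fmt.Println("rewrite failed:", err)
        }
    }
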
	I0319 20:36:21.056611   59019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:36:21.067763   59019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:36:21.067818   59019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:36:21.084054   59019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
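
The netfilter probe above is a fallback chain: when the net.bridge.bridge-nf-call-iptables sysctl is missing, the br_netfilter module is loaded, and IPv4 forwarding is then enabled. A small Go sketch of that probe-then-modprobe sequence (assumes root; error handling simplified):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter checks whether the bridge netfilter sysctl exists,
    // loads the br_netfilter module when it does not, and then enables IPv4
    // forwarding, mirroring the fallback sequence in the log.
    func ensureBridgeNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // sysctl not present yet: load the kernel module that provides it.
            if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println("netfilter setup failed:", err)
        }
    }
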
	I0319 20:36:21.095014   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:21.237360   59019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:36:21.396979   59019 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:36:21.397047   59019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:36:21.402456   59019 start.go:562] Will wait 60s for crictl version
	I0319 20:36:21.402509   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.406963   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:36:21.446255   59019 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:36:21.446351   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.477273   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.519196   59019 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:36:21.520520   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:21.523401   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.523792   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:21.523822   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.524033   59019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:36:21.528973   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
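
The bash one-liner above upserts the host.minikube.internal entry: it filters any existing line for that hostname out of /etc/hosts, appends a fresh "ip<TAB>hostname" entry, and copies the result back through a temp file. The same filter-and-append rewrite as a Go sketch (simplified: no sudo and no temp-file copy):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any line ending in "\t<hostname>" and appends a
    // fresh "ip\thostname" entry, which is what the bash one-liner in the log
    // does to /etc/hosts.
    func upsertHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+hostname) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
            fmt.Println("hosts update failed:", err)
        }
    }
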
	I0319 20:36:21.543033   59019 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:36:21.543154   59019 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:36:21.543185   59019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:21.583439   59019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:36:21.583472   59019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:36:21.583515   59019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.583551   59019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.583566   59019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.583610   59019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.583622   59019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.583646   59019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.583731   59019 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:36:21.583766   59019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585216   59019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585225   59019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.585236   59019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.585210   59019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.585247   59019 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:36:21.585253   59019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.585285   59019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.585297   59019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:19.163241   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:21.165282   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:22.071931   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:24.567506   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.567537   60008 pod_ready.go:81] duration metric: took 13.509614974s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.567553   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573414   60008 pod_ready.go:92] pod "kube-proxy-bwj22" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.573444   60008 pod_ready.go:81] duration metric: took 5.881434ms for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573457   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580429   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.580452   60008 pod_ready.go:81] duration metric: took 6.984808ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580463   60008 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.722682   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.727610   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:36:21.738933   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.740326   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.772871   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.801213   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.829968   59019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:36:21.830008   59019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.830053   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.832291   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.945513   59019 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:36:21.945558   59019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.945612   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945618   59019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:36:21.945651   59019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.945663   59019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:36:21.945687   59019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.945695   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945721   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970009   59019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:36:21.970052   59019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.970079   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.970090   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970100   59019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:36:21.970125   59019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.970149   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970177   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:22.062153   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:36:22.062260   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.063754   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.063840   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.091003   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091052   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:22.091104   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091335   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:22.091372   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0319 20:36:22.091382   59019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091405   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:36:22.091423   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0319 20:36:22.091426   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091475   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:22.096817   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0319 20:36:22.155139   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.155289   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.190022   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0319 20:36:22.190072   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.190166   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.507872   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445006   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.353551966s)
	I0319 20:36:26.445031   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0319 20:36:26.445049   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445063   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (4.289744726s)
	I0319 20:36:26.445095   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445099   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445107   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (4.254920134s)
	I0319 20:36:26.445135   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445176   59019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.937263856s)
	I0319 20:36:26.445228   59019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:36:26.445254   59019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445296   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:23.665322   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.167485   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.588550   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:29.088665   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.407117   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (1.96198659s)
	I0319 20:36:28.407156   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:36:28.407176   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407171   59019 ssh_runner.go:235] Completed: which crictl: (1.961850083s)
	I0319 20:36:28.407212   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407244   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:30.495567   59019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088296063s)
	I0319 20:36:30.495590   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.088358118s)
	I0319 20:36:30.495606   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:36:30.495617   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:36:30.495633   59019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495686   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495735   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:28.662588   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.163637   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.589581   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:34.090180   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.473194   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977482574s)
	I0319 20:36:32.473238   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:36:32.473263   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:32.473260   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977498716s)
	I0319 20:36:32.473294   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0319 20:36:32.473311   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:34.927774   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.454440131s)
	I0319 20:36:34.927813   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:36:34.927842   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:34.927888   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:33.664608   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.163358   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.588459   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:38.590173   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.512011   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.584091271s)
	I0319 20:36:37.512048   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0319 20:36:37.512077   59019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:37.512134   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:38.589202   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.077040733s)
	I0319 20:36:38.589231   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0319 20:36:38.589263   59019 cache_images.go:123] Successfully loaded all cached images
	I0319 20:36:38.589278   59019 cache_images.go:92] duration metric: took 17.005785801s to LoadCachedImages
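
Because this is the no-preload profile, no preloaded image tarball exists, so each required image is inspected with podman, removed if stale, transferred from the local cache and loaded with podman load (about 17s in total here). A hedged Go sketch of the per-image check-then-load step (paths are illustrative and the scp transfer is omitted; this is not minikube's internal API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImageLoaded checks whether an image is already present in the
    // CRI-O/podman image store and, if not, loads it from a cached tarball.
    func ensureImageLoaded(image, tarball string) error {
        if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
            return nil // already present, nothing to do
        }
        if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        err := ensureImageLoaded("registry.k8s.io/etcd:3.5.12-0", "/var/lib/minikube/images/etcd_3.5.12-0")
        fmt.Println("etcd image ensured, err:", err)
    }
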
	I0319 20:36:38.589291   59019 kubeadm.go:928] updating node { 192.168.72.29 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:36:38.589415   59019 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-414130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:38.589495   59019 ssh_runner.go:195] Run: crio config
	I0319 20:36:38.648312   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:38.648334   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:38.648346   59019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:38.648366   59019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-414130 NodeName:no-preload-414130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:38.648494   59019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-414130"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:38.648554   59019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:36:38.665850   59019 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:38.665928   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:38.678211   59019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0319 20:36:38.701657   59019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:36:38.721498   59019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0319 20:36:38.741159   59019 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:38.745617   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:38.759668   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:38.896211   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:38.916698   59019 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130 for IP: 192.168.72.29
	I0319 20:36:38.916720   59019 certs.go:194] generating shared ca certs ...
	I0319 20:36:38.916748   59019 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:38.916888   59019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:38.916930   59019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:38.916943   59019 certs.go:256] generating profile certs ...
	I0319 20:36:38.917055   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.key
	I0319 20:36:38.917134   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key.2d7d554c
	I0319 20:36:38.917185   59019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key
	I0319 20:36:38.917324   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:38.917381   59019 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:38.917396   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:38.917434   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:38.917469   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:38.917501   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:38.917553   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:38.918130   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:38.959630   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:39.007656   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:39.046666   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:39.078901   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:36:39.116600   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:36:39.158517   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:39.188494   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:39.218770   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:39.247341   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:39.275816   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:39.303434   59019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:39.326445   59019 ssh_runner.go:195] Run: openssl version
	I0319 20:36:39.333373   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:39.346280   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352619   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352686   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.359796   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:39.372480   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:39.384231   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389760   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389818   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.396639   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:39.408887   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:39.421847   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427779   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427848   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.434447   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
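
Each CA certificate is copied under /usr/share/ca-certificates and then exposed to OpenSSL through a <subject-hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem above), where the hash comes from openssl x509 -hash. A Go sketch of creating such a link (assumes openssl is on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
    // creates the "<hash>.0" symlink that the system trust-store lookup expects,
    // as the log does for 17301.pem, 173012.pem and minikubeCA.pem.
    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println("symlink created, err:", err)
    }
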
	I0319 20:36:39.446945   59019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:39.452219   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:39.458729   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:39.465298   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:39.471931   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:39.478810   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:39.485551   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
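
The -checkend 86400 runs above verify that every control-plane certificate stays valid for at least another 24 hours. The equivalent check in pure Go with crypto/x509 (a sketch; only the first PEM block of the file is inspected):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least the given duration, the condition "openssl x509 -checkend 86400" tests.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("valid for 24h:", ok, "err:", err)
    }
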
	I0319 20:36:39.492084   59019 kubeadm.go:391] StartCluster: {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:39.492210   59019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:39.492297   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.535094   59019 cri.go:89] found id: ""
	I0319 20:36:39.535157   59019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:39.549099   59019 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:39.549123   59019 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:39.549129   59019 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:39.549179   59019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:39.560565   59019 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:39.561570   59019 kubeconfig.go:125] found "no-preload-414130" server: "https://192.168.72.29:8443"
	I0319 20:36:39.563750   59019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:39.578708   59019 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.29
	I0319 20:36:39.578746   59019 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:39.578756   59019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:39.578799   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.620091   59019 cri.go:89] found id: ""
	I0319 20:36:39.620152   59019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:39.639542   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:39.652115   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:39.652133   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:39.652190   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:36:39.664047   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:39.664114   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:39.675218   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:36:39.685482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:39.685533   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:39.695803   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.705482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:39.705538   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.715747   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:36:39.725260   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:39.725324   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:39.735246   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:39.745069   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:39.862945   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.548185   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.794369   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.891458   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.992790   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:40.992871   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.493489   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.164706   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:40.662753   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:42.663084   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.087924   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:43.087987   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.993208   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.040237   59019 api_server.go:72] duration metric: took 1.047447953s to wait for apiserver process to appear ...
	I0319 20:36:42.040278   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:42.040323   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:42.040927   59019 api_server.go:269] stopped: https://192.168.72.29:8443/healthz: Get "https://192.168.72.29:8443/healthz": dial tcp 192.168.72.29:8443: connect: connection refused
	I0319 20:36:42.541457   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.853765   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:44.853796   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:44.853834   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.967607   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:44.967648   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.040791   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.049359   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.049400   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.541024   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.545880   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.545907   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.041423   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.046075   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.046101   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.541147   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.546547   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.546587   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:44.664041   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.163545   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.040899   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.046413   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.046453   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:47.541051   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.547309   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.547334   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.040856   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.046293   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:48.046318   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.540858   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.545311   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:36:48.551941   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:36:48.551962   59019 api_server.go:131] duration metric: took 6.511678507s to wait for apiserver health ...
	I0319 20:36:48.551970   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:48.551976   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:48.553824   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:45.588011   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.589644   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:50.088130   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:48.555094   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:48.566904   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:48.592246   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:48.603249   59019 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:48.603277   59019 system_pods.go:61] "coredns-7db6d8ff4d-t42ph" [bc831304-6e17-452d-8059-22bb46bad525] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:48.603284   59019 system_pods.go:61] "etcd-no-preload-414130" [e2ac0f77-fade-4ac6-a472-58df4040a57d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:48.603294   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [1128c23f-0cc6-4cd4-aeed-32f3d4570e2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:48.603300   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [b03747b6-c3ed-44cf-bcc8-dc2cea408100] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:48.603304   59019 system_pods.go:61] "kube-proxy-dttkh" [23ac1cd6-588b-4745-9c0b-740f9f0e684c] Running
	I0319 20:36:48.603313   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [99fde84c-78d6-4c57-8889-c0d9f3b55a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:48.603318   59019 system_pods.go:61] "metrics-server-569cc877fc-jvlnl" [318246fd-b809-40fa-8aff-78eb33ea10fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:48.603322   59019 system_pods.go:61] "storage-provisioner" [80470118-b092-4ba1-b830-d6f13173434d] Running
	I0319 20:36:48.603327   59019 system_pods.go:74] duration metric: took 11.054488ms to wait for pod list to return data ...
	I0319 20:36:48.603336   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:48.606647   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:48.606667   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:48.606678   59019 node_conditions.go:105] duration metric: took 3.33741ms to run NodePressure ...
	I0319 20:36:48.606693   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:48.888146   59019 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898053   59019 kubeadm.go:733] kubelet initialised
	I0319 20:36:48.898073   59019 kubeadm.go:734] duration metric: took 9.903203ms waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898082   59019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:48.911305   59019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:50.918568   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:49.664061   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.162467   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.588174   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:55.088783   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:52.920852   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:54.419386   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.419410   59019 pod_ready.go:81] duration metric: took 5.508083852s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.419420   59019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926059   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.926081   59019 pod_ready.go:81] duration metric: took 506.65554ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926090   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930519   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.930538   59019 pod_ready.go:81] duration metric: took 4.441479ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930546   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.436969   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.436991   59019 pod_ready.go:81] duration metric: took 506.439126ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.437002   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443096   59019 pod_ready.go:92] pod "kube-proxy-dttkh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.443120   59019 pod_ready.go:81] duration metric: took 6.110267ms for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443132   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465091   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:56.465114   59019 pod_ready.go:81] duration metric: took 1.021974956s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465123   59019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.163556   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.663128   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:57.589188   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.093044   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:58.471804   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.478113   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:58.663274   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.663336   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.663834   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.588018   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.087994   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:02.973736   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.471180   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.161734   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.161995   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.091895   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.588324   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:07.971754   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.974080   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.162947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:11.661800   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.088585   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.588430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:12.475641   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.971849   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:13.662232   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:15.663770   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.588577   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.590602   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.972091   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.972471   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.473526   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.161717   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:20.166272   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:22.661804   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.088608   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:23.587236   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:23.975755   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.472094   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:24.663592   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.664475   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:25.589800   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.087868   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.088398   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:28.472705   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.972037   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:29.162647   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.662382   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:32.088569   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.587686   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:32.972676   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:35.471024   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.161665   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.169093   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.587810   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.590891   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:37.472396   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:39.472831   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.662719   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:40.663337   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:41.088396   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.089545   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.972560   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:44.471857   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.162059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.163324   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:47.662016   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.588501   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:48.087736   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:50.091413   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:46.972679   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.473552   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.665655   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.164565   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.587562   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.587929   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:51.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.471542   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.472616   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.663037   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.666932   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.588855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.087276   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:58.971607   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.474225   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.165581   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.169140   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.087715   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.092449   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.972924   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.473336   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.665054   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.666413   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.588698   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.087857   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.088761   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:08.475109   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.973956   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.162101   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.663715   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:12.588671   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.088453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:13.473479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.971583   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:13.162747   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.163005   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.662157   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.587779   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:19.588889   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:18.473455   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.473644   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.162290   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:22.167423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.589226   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.090617   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:22.473728   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.971753   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.663722   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.665702   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.589574   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.088379   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:38:26.972361   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.471757   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.473307   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:28.666420   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.162701   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.089336   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.587759   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.473519   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:35.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.665536   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.161727   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.087816   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.587570   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:38.471727   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.472995   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.162339   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.162741   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:42.661979   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.588023   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.088381   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:45.091312   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:42.474517   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.971377   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.662325   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.663603   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:47.587861   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:50.086555   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.975287   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.475667   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.164849   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:51.661947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.087983   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.588099   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:51.971311   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:53.971907   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:55.972358   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.162991   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.163316   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.588120   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:59.090000   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
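
	Every cycle in this stretch of the log runs the same probe: pgrep for a live kube-apiserver, then a crictl listing for each expected control-plane container, and every lookup comes back with an empty id. The same checks can be reproduced by hand on the node with the commands already visible in the log (the trailing comments are explanatory additions):

	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output: no apiserver container was ever created
	    sudo journalctl -u kubelet -n 400                  # kubelet side: why the static pods were not started
	    sudo journalctl -u crio -n 400                     # CRI-O side of the same window
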
	I0319 20:38:57.973293   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.471215   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:58.662516   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.663859   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.588049   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.589283   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
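
	The recurring "failed describe nodes ... connection refused" block means nothing is listening on localhost:8443, i.e. the apiserver never came up, rather than kubectl being pointed at the wrong endpoint. A minimal sketch for confirming that on the node, assuming the ss utility is present in the guest (the kubectl path and kubeconfig are the ones used in the log):

	    sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
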
	I0319 20:39:02.471981   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:04.472495   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.164344   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:05.665651   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:06.086880   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.088337   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:06.971135   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.971957   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.473482   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.162486   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.662096   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:12.662841   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.587271   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:13.087563   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:15.088454   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
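
	Interleaved with that probe loop, three other test processes (59019, 59415, 60008) keep polling their metrics-server pods, which never report Ready during this window. A hedged way to inspect the same state from the host, assuming the conventional k8s-app=metrics-server label and using <profile> as a placeholder for the kubectl context (neither appears verbatim in this log):

	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-ddl2q
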
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:13.972462   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.471598   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:14.664028   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.665749   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.587968   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.087138   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.471693   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.473198   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:19.162423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.164242   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:22.087921   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.089243   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:22.971141   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.971943   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:23.663805   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.162590   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.587708   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.588720   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.972042   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.972160   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:30.973402   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.664060   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.161446   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.092087   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.588849   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.473205   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.971076   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.162170   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.164007   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:37.662820   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.588912   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:38.086910   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:40.089385   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:37.971651   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.973467   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.663277   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.162392   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.587250   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.589855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.472493   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.973064   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.162740   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:45.162232   59415 pod_ready.go:81] duration metric: took 4m0.006756965s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:39:45.162255   59415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:39:45.162262   59415 pod_ready.go:38] duration metric: took 4m8.418792568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:39:45.162277   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:39:45.162309   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.162363   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.219659   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:45.219685   59415 cri.go:89] found id: ""
	I0319 20:39:45.219694   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:45.219737   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.225012   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.225072   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.268783   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.268803   59415 cri.go:89] found id: ""
	I0319 20:39:45.268810   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:45.268875   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.273758   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.273813   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:45.316870   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.316893   59415 cri.go:89] found id: ""
	I0319 20:39:45.316901   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:45.316942   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.321910   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:45.321968   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:45.360077   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:45.360098   59415 cri.go:89] found id: ""
	I0319 20:39:45.360105   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:45.360157   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.365517   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:45.365580   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:45.407686   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:45.407704   59415 cri.go:89] found id: ""
	I0319 20:39:45.407711   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:45.407752   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.412894   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:45.412954   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:45.451930   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:45.451953   59415 cri.go:89] found id: ""
	I0319 20:39:45.451964   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:45.452009   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.456634   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:45.456699   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:45.498575   59415 cri.go:89] found id: ""
	I0319 20:39:45.498601   59415 logs.go:276] 0 containers: []
	W0319 20:39:45.498611   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:45.498619   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:45.498678   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:45.548381   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.548400   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.548405   59415 cri.go:89] found id: ""
	I0319 20:39:45.548411   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:45.548469   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.553470   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.558445   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:45.558471   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.603464   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:45.603490   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.650631   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:45.650663   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:45.668744   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:45.668775   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:45.823596   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:45.823625   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.891879   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:45.891911   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.944237   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:45.944284   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:46.005819   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:46.005848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:46.069819   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.069848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.648008   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.648051   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:46.701035   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.701073   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.753159   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:46.753189   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:46.804730   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:46.804767   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:47.087453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.088165   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:39:47.471171   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.472374   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:49.351186   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.368780   59415 api_server.go:72] duration metric: took 4m19.832131165s to wait for apiserver process to appear ...
	I0319 20:39:49.368806   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:39:49.368844   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:49.368913   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:49.408912   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:49.408937   59415 cri.go:89] found id: ""
	I0319 20:39:49.408947   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:49.409010   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.414194   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:49.414263   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:49.456271   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.456298   59415 cri.go:89] found id: ""
	I0319 20:39:49.456307   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:49.456374   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.461250   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:49.461316   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:49.510029   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:49.510052   59415 cri.go:89] found id: ""
	I0319 20:39:49.510061   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:49.510119   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.515604   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:49.515667   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:49.561004   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:49.561026   59415 cri.go:89] found id: ""
	I0319 20:39:49.561034   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:49.561100   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.566205   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:49.566276   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:49.610666   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:49.610685   59415 cri.go:89] found id: ""
	I0319 20:39:49.610693   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:49.610735   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.615683   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:49.615730   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:49.657632   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:49.657648   59415 cri.go:89] found id: ""
	I0319 20:39:49.657655   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:49.657711   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.662128   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:49.662172   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:49.699037   59415 cri.go:89] found id: ""
	I0319 20:39:49.699060   59415 logs.go:276] 0 containers: []
	W0319 20:39:49.699068   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:49.699074   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:49.699131   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:49.754331   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:49.754353   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:49.754359   59415 cri.go:89] found id: ""
	I0319 20:39:49.754368   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:49.754437   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.759210   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.763797   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:49.763816   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.818285   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:49.818314   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:49.946232   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:49.946266   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.994160   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:49.994186   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:50.042893   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:50.042923   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:50.099333   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:50.099362   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:50.547046   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:50.547082   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:50.593081   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:50.593111   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:50.632611   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:50.632643   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:50.689610   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:50.689641   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:50.707961   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:50.707997   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:50.752684   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:50.752713   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:50.790114   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:50.790139   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
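
The lines above are minikube's standard diagnostics sweep for this profile: each control-plane component is resolved to a CRI container ID with crictl ps, that container's recent logs are tailed, and the picture is rounded out with journalctl for crio and the kubelet, dmesg, and kubectl describe nodes. The same sweep can be reproduced by hand on the guest; a minimal sketch, reusing only commands already visible in this log:

    # Resolve a component to its container ID, then tail its logs (same commands as above).
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo /usr/bin/crictl logs --tail 400 "$ID"
    # Runtime- and node-level context gathered alongside the container logs.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
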
	I0319 20:39:51.089647   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.588183   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:39:51.972170   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:54.471260   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:56.472093   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.338254   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:39:53.343669   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:39:53.344796   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:39:53.344816   59415 api_server.go:131] duration metric: took 3.976004163s to wait for apiserver health ...
	I0319 20:39:53.344824   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:39:53.344854   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:53.344896   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:53.407914   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.407939   59415 cri.go:89] found id: ""
	I0319 20:39:53.407948   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:53.408000   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.414299   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:53.414360   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:53.466923   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:53.466944   59415 cri.go:89] found id: ""
	I0319 20:39:53.466953   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:53.467006   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.472181   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:53.472247   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:53.511808   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:53.511830   59415 cri.go:89] found id: ""
	I0319 20:39:53.511839   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:53.511900   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.517386   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:53.517445   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:53.560360   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:53.560383   59415 cri.go:89] found id: ""
	I0319 20:39:53.560390   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:53.560433   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.565131   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:53.565181   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:53.611243   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:53.611264   59415 cri.go:89] found id: ""
	I0319 20:39:53.611273   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:53.611326   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.616327   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:53.616391   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:53.656775   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:53.656794   59415 cri.go:89] found id: ""
	I0319 20:39:53.656801   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:53.656846   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.661915   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:53.661966   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:53.700363   59415 cri.go:89] found id: ""
	I0319 20:39:53.700389   59415 logs.go:276] 0 containers: []
	W0319 20:39:53.700396   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:53.700401   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:53.700454   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:53.750337   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:53.750357   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:53.750360   59415 cri.go:89] found id: ""
	I0319 20:39:53.750373   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:53.750426   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.755835   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.761078   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:53.761099   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:53.812898   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:53.812928   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:53.934451   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:53.934482   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.989117   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:53.989148   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:54.386028   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:54.386060   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:54.437864   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:54.437893   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:54.456559   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:54.456584   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:54.506564   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:54.506593   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:54.551120   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:54.551151   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:54.595768   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:54.595794   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:54.637715   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:54.637745   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:54.689666   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:54.689706   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:54.731821   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:54.731851   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:57.287839   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:39:57.287866   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.287870   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.287874   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.287878   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.287881   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.287884   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.287890   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.287894   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.287901   59415 system_pods.go:74] duration metric: took 3.943071923s to wait for pod list to return data ...
	I0319 20:39:57.287907   59415 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:39:57.290568   59415 default_sa.go:45] found service account: "default"
	I0319 20:39:57.290587   59415 default_sa.go:55] duration metric: took 2.674741ms for default service account to be created ...
	I0319 20:39:57.290594   59415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:39:57.296691   59415 system_pods.go:86] 8 kube-system pods found
	I0319 20:39:57.296710   59415 system_pods.go:89] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.296718   59415 system_pods.go:89] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.296722   59415 system_pods.go:89] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.296726   59415 system_pods.go:89] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.296730   59415 system_pods.go:89] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.296734   59415 system_pods.go:89] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.296741   59415 system_pods.go:89] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.296747   59415 system_pods.go:89] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.296753   59415 system_pods.go:126] duration metric: took 6.154905ms to wait for k8s-apps to be running ...
	I0319 20:39:57.296762   59415 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:39:57.296803   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:57.313729   59415 system_svc.go:56] duration metric: took 16.960151ms WaitForService to wait for kubelet
	I0319 20:39:57.313753   59415 kubeadm.go:576] duration metric: took 4m27.777105553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:39:57.313777   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:39:57.316765   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:39:57.316789   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:39:57.316803   59415 node_conditions.go:105] duration metric: took 3.021397ms to run NodePressure ...
	I0319 20:39:57.316813   59415 start.go:240] waiting for startup goroutines ...
	I0319 20:39:57.316820   59415 start.go:245] waiting for cluster config update ...
	I0319 20:39:57.316830   59415 start.go:254] writing updated cluster config ...
	I0319 20:39:57.317087   59415 ssh_runner.go:195] Run: rm -f paused
	I0319 20:39:57.365814   59415 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:39:57.368111   59415 out.go:177] * Done! kubectl is now configured to use "embed-certs-421660" cluster and "default" namespace by default
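
With the "Done!" line above, the embed-certs-421660 profile has finished its restart and its kubeconfig entry is active. A quick, hedged sanity check from the host uses kubectl's --context flag against the profile name:

    # Confirm the restarted cluster answers and list its system pods.
    kubectl --context embed-certs-421660 get nodes
    kubectl --context embed-certs-421660 get pods -n kube-system
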
	I0319 20:39:56.088199   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.088480   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.091027   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.971917   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.972329   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:02.589430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.088313   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:03.474330   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.972928   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:07.587315   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:09.588829   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:08.471254   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:10.472963   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.087905   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:14.589786   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.973661   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:15.471559   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.087489   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.087559   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.473159   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.975538   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:21.090446   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:23.588215   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.581466   60008 pod_ready.go:81] duration metric: took 4m0.000988658s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:24.581495   60008 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:40:24.581512   60008 pod_ready.go:38] duration metric: took 4m13.547382951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:24.581535   60008 kubeadm.go:591] duration metric: took 4m20.894503953s to restartPrimaryControlPlane
	W0319 20:40:24.581583   60008 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:24.581611   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:22.472853   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.972183   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:26.973460   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:28.974127   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:31.475479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:33.973020   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:36.471909   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:38.473008   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:40.975638   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:43.473149   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:45.474566   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
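
The repeated kubelet-check failures above mean kubeadm cannot reach the kubelet healthz endpoint on the node it is bootstrapping. The probe kubeadm quotes can be run directly on the guest to tell whether the kubelet ever started; a sketch built from the commands this log already uses, plus a plain systemctl status:

    # Same probe kubeadm performs, then service state and recent kubelet logs.
    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 400
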
	I0319 20:40:47.972615   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:50.472593   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:52.973302   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:55.472067   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:56.465422   59019 pod_ready.go:81] duration metric: took 4m0.000285496s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:56.465453   59019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0319 20:40:56.465495   59019 pod_ready.go:38] duration metric: took 4m7.567400515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:56.465521   59019 kubeadm.go:591] duration metric: took 4m16.916387223s to restartPrimaryControlPlane
	W0319 20:40:56.465574   59019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:56.465604   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:56.963018   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.381377433s)
	I0319 20:40:56.963106   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:40:56.982252   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:40:56.994310   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:40:57.004950   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:40:57.004974   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:40:57.005018   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:40:57.015009   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:40:57.015070   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:40:57.026153   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:40:57.036560   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:40:57.036611   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:40:57.047469   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.060137   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:40:57.060188   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.073305   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:40:57.083299   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:40:57.083372   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
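
The grep/rm sequence above is minikube's stale-config cleanup before re-initialising: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so kubeadm init can regenerate it. The same loop, written compactly against the port-8444 endpoint used by this profile:

    # Drop any kubeconfig that does not reference the expected control-plane URL.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
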
	I0319 20:40:57.093788   60008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:40:57.352358   60008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:05.910387   60008 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:41:05.910460   60008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:05.910542   60008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:05.910660   60008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:05.910798   60008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:05.910903   60008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:05.912366   60008 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:05.912439   60008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:05.912493   60008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:05.912563   60008 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:05.912614   60008 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:05.912673   60008 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:05.912726   60008 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:05.912809   60008 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:05.912874   60008 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:05.912975   60008 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:05.913082   60008 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:05.913142   60008 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:05.913197   60008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:05.913258   60008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:05.913363   60008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:05.913439   60008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:05.913536   60008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:05.913616   60008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:05.913738   60008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:05.913841   60008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:05.915394   60008 out.go:204]   - Booting up control plane ...
	I0319 20:41:05.915486   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:05.915589   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:05.915682   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:05.915832   60008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:05.915951   60008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:05.916010   60008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:05.916154   60008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:05.916255   60008 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505433 seconds
	I0319 20:41:05.916392   60008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:05.916545   60008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:05.916628   60008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:05.916839   60008 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-385240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:05.916908   60008 kubeadm.go:309] [bootstrap-token] Using token: y9pq78.ls188thm3dr5dool
	I0319 20:41:05.918444   60008 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:05.918567   60008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:05.918654   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:05.918821   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:05.918999   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:05.919147   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:05.919260   60008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:05.919429   60008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:05.919498   60008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:05.919572   60008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:05.919582   60008 kubeadm.go:309] 
	I0319 20:41:05.919665   60008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:05.919678   60008 kubeadm.go:309] 
	I0319 20:41:05.919787   60008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:05.919799   60008 kubeadm.go:309] 
	I0319 20:41:05.919834   60008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:05.919929   60008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:05.920007   60008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:05.920017   60008 kubeadm.go:309] 
	I0319 20:41:05.920102   60008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:05.920112   60008 kubeadm.go:309] 
	I0319 20:41:05.920182   60008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:05.920191   60008 kubeadm.go:309] 
	I0319 20:41:05.920284   60008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:05.920411   60008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:05.920506   60008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:05.920520   60008 kubeadm.go:309] 
	I0319 20:41:05.920648   60008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:05.920762   60008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:05.920771   60008 kubeadm.go:309] 
	I0319 20:41:05.920901   60008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921063   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:05.921099   60008 kubeadm.go:309] 	--control-plane 
	I0319 20:41:05.921105   60008 kubeadm.go:309] 
	I0319 20:41:05.921207   60008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:05.921216   60008 kubeadm.go:309] 
	I0319 20:41:05.921285   60008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921386   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
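
The join commands printed above carry a discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. If it ever needs to be re-derived (for instance after the captured token expires), the upstream kubeadm documentation gives an openssl one-liner; sketched here against the certificateDir this cluster uses (/var/lib/minikube/certs, per the [certs] lines above):

    # Recompute the discovery-token CA cert hash on the control-plane node.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
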
	I0319 20:41:05.921397   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:41:05.921403   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:05.922921   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:05.924221   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:05.941888   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
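
The 457-byte file copied above is the bridge CNI configuration that the "kvm2 driver + crio runtime" recommendation at cni.go:146 resolves to. Its contents are not reproduced in this log, but they can be inspected on the guest; a hypothetical check via the minikube ssh wrapper:

    # Inspect the bridge CNI config minikube just wrote (path taken from the log line above).
    out/minikube-linux-amd64 -p default-k8s-diff-port-385240 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
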
	I0319 20:41:06.040294   60008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:06.040378   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.040413   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-385240 minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=default-k8s-diff-port-385240 minikube.k8s.io/primary=true
	I0319 20:41:06.104038   60008 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:06.266168   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.766345   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.266622   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.766418   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.266864   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.766777   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.266420   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.766319   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:10.266990   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:10.766714   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.266839   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.767222   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.266933   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.766390   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.266562   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.766618   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.267159   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.767010   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.266307   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.767002   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.266488   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.766567   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.266789   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.766935   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.266312   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.767202   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.904766   60008 kubeadm.go:1107] duration metric: took 12.864451937s to wait for elevateKubeSystemPrivileges
	W0319 20:41:18.904802   60008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:18.904810   60008 kubeadm.go:393] duration metric: took 5m15.275720912s to StartCluster
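
The half-second cadence of the "get sa default" calls above is minikube waiting for the default ServiceAccount to appear as part of the elevateKubeSystemPrivileges step timed at 12.86s above. The same wait expressed directly in shell, assuming the bundled kubectl and kubeconfig paths shown in the log:

    # Poll every 500ms until the "default" ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
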
	I0319 20:41:18.904826   60008 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.904910   60008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:18.906545   60008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.906817   60008 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:18.908538   60008 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:18.906944   60008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:18.907019   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:41:18.910084   60008 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:18.910100   60008 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910125   60008 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:18.910135   60008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385240"
	W0319 20:41:18.910141   60008 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:18.910255   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910127   60008 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.910313   60008 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:18.910334   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910603   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910635   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910647   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910667   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910692   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910671   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.927094   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0319 20:41:18.927240   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0319 20:41:18.927517   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.927620   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928036   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928059   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928074   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0319 20:41:18.928331   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928360   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928492   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928538   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928737   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928993   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.929009   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.929046   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.929066   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929108   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.929338   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.929862   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929893   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.932815   60008 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.932838   60008 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:18.932865   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.933211   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.933241   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.945888   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0319 20:41:18.946351   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.946842   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.946869   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.947426   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.947600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.947808   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0319 20:41:18.948220   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.948367   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0319 20:41:18.948739   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.948753   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.949222   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.949277   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.951252   60008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:18.949736   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.950173   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.951720   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.952838   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.952813   60008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:18.952917   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:18.952934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.952815   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.953264   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.953460   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.955228   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.957199   60008 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:18.958698   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:18.958715   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:18.958733   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.956502   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.957073   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.958806   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.958845   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.959306   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.959485   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.959783   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.961410   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961775   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.961802   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961893   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.962065   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.962213   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.962369   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.975560   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0319 20:41:18.976026   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.976503   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.976524   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.976893   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.977128   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.978582   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.978862   60008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:18.978881   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:18.978898   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.981356   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981730   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.981762   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.982056   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.982192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.982337   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:19.126985   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:19.188792   60008 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198961   60008 node_ready.go:49] node "default-k8s-diff-port-385240" has status "Ready":"True"
	I0319 20:41:19.198981   60008 node_ready.go:38] duration metric: took 10.160382ms for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198992   60008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:19.209346   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:19.335212   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:19.414291   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:19.506570   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:19.506590   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:19.651892   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:19.651916   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:19.808237   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:19.808282   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:19.924353   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:20.583635   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169310347s)
	I0319 20:41:20.583700   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.583717   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.583981   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.583991   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.584015   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.584027   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.584253   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.584282   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585518   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250274289s)
	I0319 20:41:20.585568   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585584   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.585855   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.585879   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.585888   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585902   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585916   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.586162   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.586168   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.586177   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.609166   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.609183   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.609453   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.609492   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.609502   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.750409   60008 pod_ready.go:92] pod "coredns-76f75df574-4rq6h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:20.750433   60008 pod_ready.go:81] duration metric: took 1.541065393s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.750442   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.869692   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.869719   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.869995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.870000   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870025   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870045   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.870057   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.870336   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870352   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870366   60008 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:20.872093   60008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:41:20.873465   60008 addons.go:505] duration metric: took 1.966520277s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:41:21.260509   60008 pod_ready.go:92] pod "coredns-76f75df574-swxdt" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.260533   60008 pod_ready.go:81] duration metric: took 510.083899ms for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.260543   60008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268298   60008 pod_ready.go:92] pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.268324   60008 pod_ready.go:81] duration metric: took 7.772878ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268335   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274436   60008 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.274461   60008 pod_ready.go:81] duration metric: took 6.117464ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274472   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281324   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.281347   60008 pod_ready.go:81] duration metric: took 6.866088ms for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281367   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.593980   60008 pod_ready.go:92] pod "kube-proxy-j7ghm" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.594001   60008 pod_ready.go:81] duration metric: took 312.62702ms for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.594009   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993321   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.993346   60008 pod_ready.go:81] duration metric: took 399.330556ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993362   60008 pod_ready.go:38] duration metric: took 2.794359581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:21.993375   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:21.993423   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:22.010583   60008 api_server.go:72] duration metric: took 3.10372573s to wait for apiserver process to appear ...
	I0319 20:41:22.010609   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:22.010629   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:41:22.015218   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:41:22.016276   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:41:22.016291   60008 api_server.go:131] duration metric: took 5.6763ms to wait for apiserver health ...
	I0319 20:41:22.016298   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:22.197418   60008 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:22.197454   60008 system_pods.go:61] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.197460   60008 system_pods.go:61] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.197465   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.197470   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.197476   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.197481   60008 system_pods.go:61] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.197485   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.197494   60008 system_pods.go:61] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.197500   60008 system_pods.go:61] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.197514   60008 system_pods.go:74] duration metric: took 181.210964ms to wait for pod list to return data ...
	I0319 20:41:22.197526   60008 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:22.392702   60008 default_sa.go:45] found service account: "default"
	I0319 20:41:22.392738   60008 default_sa.go:55] duration metric: took 195.195704ms for default service account to be created ...
	I0319 20:41:22.392751   60008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:22.595946   60008 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:22.595975   60008 system_pods.go:89] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.595980   60008 system_pods.go:89] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.595985   60008 system_pods.go:89] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.595991   60008 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.595996   60008 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.596006   60008 system_pods.go:89] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.596010   60008 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.596016   60008 system_pods.go:89] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.596022   60008 system_pods.go:89] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.596034   60008 system_pods.go:126] duration metric: took 203.277741ms to wait for k8s-apps to be running ...
	I0319 20:41:22.596043   60008 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:22.596087   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:22.615372   60008 system_svc.go:56] duration metric: took 19.319488ms WaitForService to wait for kubelet
	I0319 20:41:22.615396   60008 kubeadm.go:576] duration metric: took 3.708546167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:22.615413   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:22.793277   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:22.793303   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:22.793313   60008 node_conditions.go:105] duration metric: took 177.89499ms to run NodePressure ...
	I0319 20:41:22.793325   60008 start.go:240] waiting for startup goroutines ...
	I0319 20:41:22.793331   60008 start.go:245] waiting for cluster config update ...
	I0319 20:41:22.793342   60008 start.go:254] writing updated cluster config ...
	I0319 20:41:22.793598   60008 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:22.845339   60008 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:41:22.847429   60008 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-385240" cluster and "default" namespace by default
	I0319 20:41:29.064044   59019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.598411816s)
	I0319 20:41:29.064115   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:29.082924   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:41:29.095050   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:29.106905   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:29.106918   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:29.106962   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:29.118153   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:29.118209   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:29.128632   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:29.140341   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:29.140401   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:29.151723   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.162305   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:29.162365   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.173654   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:29.185155   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:29.185211   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:29.196015   59019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:29.260934   59019 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0319 20:41:29.261054   59019 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:29.412424   59019 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:29.412592   59019 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:29.412759   59019 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:29.636019   59019 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:29.638046   59019 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:29.638158   59019 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:29.638216   59019 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:29.638279   59019 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:29.638331   59019 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:29.645456   59019 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:29.645553   59019 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:29.645610   59019 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:29.645663   59019 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:29.645725   59019 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:29.645788   59019 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:29.645822   59019 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:29.645869   59019 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:29.895850   59019 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:30.248635   59019 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:30.380474   59019 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:30.457908   59019 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:30.585194   59019 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:30.585852   59019 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:30.588394   59019 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:30.590147   59019 out.go:204]   - Booting up control plane ...
	I0319 20:41:30.590241   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:30.590353   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:30.590606   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:30.611645   59019 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:30.614010   59019 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:30.614266   59019 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:30.757838   59019 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 20:41:30.757973   59019 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0319 20:41:31.758717   59019 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001332477s
	I0319 20:41:31.758819   59019 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 20:41:37.261282   59019 kubeadm.go:309] [api-check] The API server is healthy after 5.50238s
	I0319 20:41:37.275017   59019 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:37.299605   59019 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:37.335190   59019 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:37.335449   59019 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-414130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:37.350882   59019 kubeadm.go:309] [bootstrap-token] Using token: 0euy3c.pb7fih13u47u7k5a
	I0319 20:41:37.352692   59019 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:37.352796   59019 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:37.357551   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:37.365951   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:37.369544   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:37.376066   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:37.379284   59019 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:37.669667   59019 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:38.120423   59019 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:38.668937   59019 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:38.670130   59019 kubeadm.go:309] 
	I0319 20:41:38.670236   59019 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:38.670251   59019 kubeadm.go:309] 
	I0319 20:41:38.670339   59019 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:38.670348   59019 kubeadm.go:309] 
	I0319 20:41:38.670369   59019 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:38.670451   59019 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:38.670520   59019 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:38.670530   59019 kubeadm.go:309] 
	I0319 20:41:38.670641   59019 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:38.670653   59019 kubeadm.go:309] 
	I0319 20:41:38.670720   59019 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:38.670731   59019 kubeadm.go:309] 
	I0319 20:41:38.670802   59019 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:38.670916   59019 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:38.671036   59019 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:38.671053   59019 kubeadm.go:309] 
	I0319 20:41:38.671185   59019 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:38.671332   59019 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:38.671351   59019 kubeadm.go:309] 
	I0319 20:41:38.671438   59019 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671588   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:38.671609   59019 kubeadm.go:309] 	--control-plane 
	I0319 20:41:38.671613   59019 kubeadm.go:309] 
	I0319 20:41:38.671684   59019 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:38.671693   59019 kubeadm.go:309] 
	I0319 20:41:38.671758   59019 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671877   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:38.672172   59019 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:38.672197   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:41:38.672212   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:38.674158   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:38.675618   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:38.690458   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:38.712520   59019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:38.712597   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:38.712616   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-414130 minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=no-preload-414130 minikube.k8s.io/primary=true
	I0319 20:41:38.902263   59019 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:38.902364   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.403054   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.903127   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.402786   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.903358   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.403414   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.902829   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.402506   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.903338   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.402784   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.902477   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.403152   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.903190   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.402544   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.903397   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:46.402785   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:46.902656   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.402845   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.903436   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.402511   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.903073   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.402559   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.902914   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.402708   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.903441   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.403416   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.585670   59019 kubeadm.go:1107] duration metric: took 12.873132825s to wait for elevateKubeSystemPrivileges
	W0319 20:41:51.585714   59019 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:51.585724   59019 kubeadm.go:393] duration metric: took 5m12.093644869s to StartCluster
	I0319 20:41:51.585744   59019 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.585835   59019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:51.588306   59019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.588634   59019 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:51.590331   59019 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:51.588755   59019 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:51.588891   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:41:51.590430   59019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-414130"
	I0319 20:41:51.591988   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:51.592020   59019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-414130"
	W0319 20:41:51.592038   59019 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:51.592069   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.590437   59019 addons.go:69] Setting default-storageclass=true in profile "no-preload-414130"
	I0319 20:41:51.590441   59019 addons.go:69] Setting metrics-server=true in profile "no-preload-414130"
	I0319 20:41:51.592098   59019 addons.go:234] Setting addon metrics-server=true in "no-preload-414130"
	W0319 20:41:51.592114   59019 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:51.592129   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.592164   59019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-414130"
	I0319 20:41:51.592450   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592479   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592505   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592532   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.608909   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0319 20:41:51.609383   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.609942   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.609962   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.610565   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.610774   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.612725   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0319 20:41:51.612794   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0319 20:41:51.613141   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.613637   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.613660   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.614121   59019 addons.go:234] Setting addon default-storageclass=true in "no-preload-414130"
	W0319 20:41:51.614139   59019 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:51.614167   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.614214   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.614482   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614512   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614774   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614810   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614876   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.615336   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.615369   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.615703   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.616237   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.616281   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.630175   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0319 20:41:51.630802   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.631279   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.631296   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.631645   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.632322   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.632356   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.634429   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0319 20:41:51.634865   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.635311   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.635324   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.635922   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.636075   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.637997   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.640025   59019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:51.641428   59019 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:51.641445   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:51.641462   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.644316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644838   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.644853   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644875   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0319 20:41:51.645162   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.645300   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.645365   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.645499   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.645613   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.645964   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.645976   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.646447   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.646663   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.648174   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.649872   59019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:51.651152   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:51.651177   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:51.651197   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.654111   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654523   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.654545   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654792   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.654987   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.655156   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.655281   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.656648   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0319 20:41:51.656960   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.657457   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.657471   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.657751   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.657948   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.659265   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.659503   59019 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:51.659517   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:51.659533   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.662039   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662427   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.662447   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662583   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.662757   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.662879   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.662991   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.845584   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:51.876597   59019 node_ready.go:35] waiting up to 6m0s for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886290   59019 node_ready.go:49] node "no-preload-414130" has status "Ready":"True"
	I0319 20:41:51.886308   59019 node_ready.go:38] duration metric: took 9.684309ms for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886315   59019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:51.893456   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:51.976850   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:52.031123   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:52.031144   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:52.133184   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:52.195945   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:52.195968   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:52.270721   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.270745   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:52.407604   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.578113   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578140   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578511   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578524   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.578532   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.578557   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578566   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578809   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578828   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.610849   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.610873   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.611246   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.611251   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.611269   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.342742   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.209525982s)
	I0319 20:41:53.342797   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.342808   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343131   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343159   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343163   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.343174   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.343194   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343486   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343503   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343525   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.450430   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.450458   59019 pod_ready.go:81] duration metric: took 1.556981953s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.450478   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459425   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.459454   59019 pod_ready.go:81] duration metric: took 8.967211ms for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459467   59019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495144   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.495164   59019 pod_ready.go:81] duration metric: took 35.690498ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495173   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520382   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.520412   59019 pod_ready.go:81] duration metric: took 25.23062ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520426   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530859   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.530889   59019 pod_ready.go:81] duration metric: took 10.451233ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530903   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.545946   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.13830463s)
	I0319 20:41:53.545994   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546009   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546304   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546323   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546333   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546350   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546678   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546695   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546706   59019 addons.go:470] Verifying addon metrics-server=true in "no-preload-414130"
	I0319 20:41:53.546764   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.548523   59019 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0319 20:41:53.549990   59019 addons.go:505] duration metric: took 1.961237309s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0319 20:41:53.881082   59019 pod_ready.go:92] pod "kube-proxy-m7m4h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.881107   59019 pod_ready.go:81] duration metric: took 350.197776ms for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.881116   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283891   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:54.283924   59019 pod_ready.go:81] duration metric: took 402.800741ms for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283936   59019 pod_ready.go:38] duration metric: took 2.397611991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:54.283953   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:54.284016   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:54.304606   59019 api_server.go:72] duration metric: took 2.715931012s to wait for apiserver process to appear ...
	I0319 20:41:54.304629   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:54.304651   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:41:54.309292   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:41:54.310195   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:41:54.310215   59019 api_server.go:131] duration metric: took 5.579162ms to wait for apiserver health ...
	I0319 20:41:54.310225   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:54.488441   59019 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:54.488475   59019 system_pods.go:61] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.488482   59019 system_pods.go:61] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.488487   59019 system_pods.go:61] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.488491   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.488496   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.488500   59019 system_pods.go:61] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.488505   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.488514   59019 system_pods.go:61] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.488520   59019 system_pods.go:61] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.488530   59019 system_pods.go:74] duration metric: took 178.298577ms to wait for pod list to return data ...
	I0319 20:41:54.488543   59019 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:54.679537   59019 default_sa.go:45] found service account: "default"
	I0319 20:41:54.679560   59019 default_sa.go:55] duration metric: took 191.010696ms for default service account to be created ...
	I0319 20:41:54.679569   59019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:54.884163   59019 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:54.884197   59019 system_pods.go:89] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.884205   59019 system_pods.go:89] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.884211   59019 system_pods.go:89] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.884217   59019 system_pods.go:89] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.884223   59019 system_pods.go:89] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.884230   59019 system_pods.go:89] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.884236   59019 system_pods.go:89] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.884246   59019 system_pods.go:89] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.884268   59019 system_pods.go:89] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.884281   59019 system_pods.go:126] duration metric: took 204.70598ms to wait for k8s-apps to be running ...
	I0319 20:41:54.884294   59019 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:54.884348   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:54.901838   59019 system_svc.go:56] duration metric: took 17.536645ms WaitForService to wait for kubelet
	I0319 20:41:54.901869   59019 kubeadm.go:576] duration metric: took 3.313198534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:54.901887   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:55.080463   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:55.080485   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:55.080495   59019 node_conditions.go:105] duration metric: took 178.603035ms to run NodePressure ...
	I0319 20:41:55.080507   59019 start.go:240] waiting for startup goroutines ...
	I0319 20:41:55.080513   59019 start.go:245] waiting for cluster config update ...
	I0319 20:41:55.080523   59019 start.go:254] writing updated cluster config ...
	I0319 20:41:55.080753   59019 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:55.130477   59019 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0319 20:41:55.133906   59019 out.go:177] * Done! kubectl is now configured to use "no-preload-414130" cluster and "default" namespace by default
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 
	
	
	==> CRI-O <==
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.321562236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881571321531973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97dfec57-45b0-49bc-a45f-7a31f34463b4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.322505244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e43e1fb-08c3-4a2c-82fd-90ba4717abd8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.322615398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e43e1fb-08c3-4a2c-82fd-90ba4717abd8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.322651724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4e43e1fb-08c3-4a2c-82fd-90ba4717abd8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.359849475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6725c182-9da4-473c-a1bf-73eaa41bc1bf name=/runtime.v1.RuntimeService/Version
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.359929973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6725c182-9da4-473c-a1bf-73eaa41bc1bf name=/runtime.v1.RuntimeService/Version
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.361559891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf844b67-3704-4d07-a6c8-6dd07d805a33 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.361954647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881571361934251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf844b67-3704-4d07-a6c8-6dd07d805a33 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.362580416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b53d3325-2505-4fd3-a896-0a55f448ed90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.362663348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b53d3325-2505-4fd3-a896-0a55f448ed90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.362705099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b53d3325-2505-4fd3-a896-0a55f448ed90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.406705606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61c873a1-e572-44f0-b419-164174a80347 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.406810936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61c873a1-e572-44f0-b419-164174a80347 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.408302494Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd3ba8f9-c2e1-4574-ae6f-681c296588d7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.408732823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881571408713677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd3ba8f9-c2e1-4574-ae6f-681c296588d7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.409404190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af696f4a-9b30-4577-9573-bc8a9ec14b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.409457863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af696f4a-9b30-4577-9573-bc8a9ec14b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.409500095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=af696f4a-9b30-4577-9573-bc8a9ec14b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.445524502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8941ef03-31eb-404d-8027-c77c983da146 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.445632340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8941ef03-31eb-404d-8027-c77c983da146 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.446998758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49d2fc3c-9a53-47e4-ab17-cf3596dfc251 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.447474101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881571447452724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49d2fc3c-9a53-47e4-ab17-cf3596dfc251 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.447947165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d01da8b-05d0-493c-8a0d-957c1a1e07f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.448030746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d01da8b-05d0-493c-8a0d-957c1a1e07f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:52:51 old-k8s-version-159022 crio[657]: time="2024-03-19 20:52:51.448065178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4d01da8b-05d0-493c-8a0d-957c1a1e07f4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar19 20:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055341] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752911] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.544871] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.711243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.190356] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.060609] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066334] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.201088] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.130943] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.285680] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +7.272629] systemd-fstab-generator[845]: Ignoring "noauto" option for root device
	[  +0.072227] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.223992] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +10.810145] kauditd_printk_skb: 46 callbacks suppressed
	[Mar19 20:39] systemd-fstab-generator[4992]: Ignoring "noauto" option for root device
	[Mar19 20:41] systemd-fstab-generator[5275]: Ignoring "noauto" option for root device
	[  +0.073912] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:52:51 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-159022 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: net.(*sysDialer).dialSerial(0xc000d22900, 0x4f7fe40, 0xc0001abec0, 0xc000332450, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/dial.go:548 +0x152
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: net.(*Dialer).DialContext(0xc000cc17a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000cf1c50, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ccb980, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000cf1c50, 0x24, 0x60, 0x7f65292ad1c8, 0x118, ...)
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: net/http.(*Transport).dial(0xc00084cf00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000cf1c50, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: net/http.(*Transport).dialConn(0xc00084cf00, 0x4f7fe00, 0xc000120018, 0x0, 0xc000c56300, 0x5, 0xc000cf1c50, 0x24, 0x0, 0xc0006ee360, ...)
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: net/http.(*Transport).dialConnFor(0xc00084cf00, 0xc000d0a6e0)
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]: created by net/http.(*Transport).queueForDial
	Mar 19 20:52:49 old-k8s-version-159022 kubelet[6460]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 19 20:52:49 old-k8s-version-159022 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 19 20:52:49 old-k8s-version-159022 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 19 20:52:50 old-k8s-version-159022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 19 20:52:50 old-k8s-version-159022 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 19 20:52:50 old-k8s-version-159022 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 19 20:52:50 old-k8s-version-159022 kubelet[6487]: I0319 20:52:50.291551    6487 server.go:416] Version: v1.20.0
	Mar 19 20:52:50 old-k8s-version-159022 kubelet[6487]: I0319 20:52:50.291942    6487 server.go:837] Client rotation is on, will bootstrap in background
	Mar 19 20:52:50 old-k8s-version-159022 kubelet[6487]: I0319 20:52:50.294410    6487 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 19 20:52:50 old-k8s-version-159022 kubelet[6487]: W0319 20:52:50.296028    6487 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 19 20:52:50 old-k8s-version-159022 kubelet[6487]: I0319 20:52:50.296387    6487 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (241.492036ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-159022" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.40s)
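The kubelet journal above shows the service crash-looping (systemd restart counter at 114) while the API server on localhost:8443 stays unreachable, which is why the post-mortem reports the apiserver as Stopped. A manual spot-check of that state, using the same binary and profile named in this report, could look like the following (an illustrative sketch, not output captured during the run):

	out/minikube-linux-amd64 -p old-k8s-version-159022 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-159022 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"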

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (462.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-421660 -n embed-certs-421660
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-19 20:56:42.090986797 +0000 UTC m=+6726.211606572
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-421660 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-421660 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.702µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-421660 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
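The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, the override passed with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 earlier in this run; the describe call that should have produced the deployment info failed because the 9m0s context had already expired (context deadline exceeded). An equivalent manual check against the same context (an illustrative sketch using standard kubectl flags, not part of the captured output):

	kubectl --context embed-certs-421660 -n kubernetes-dashboard get deployment dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'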
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-421660 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-421660 logs -n 25: (1.796477196s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:54 UTC | 19 Mar 24 20:54 UTC |
	| start   | -p newest-cni-587652 --memory=2200 --alsologtostderr   | newest-cni-587652            | jenkins | v1.32.0 | 19 Mar 24 20:54 UTC | 19 Mar 24 20:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:54 UTC | 19 Mar 24 20:54 UTC |
	| start   | -p auto-378078 --memory=3072                           | auto-378078                  | jenkins | v1.32.0 | 19 Mar 24 20:54 UTC | 19 Mar 24 20:56 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-587652             | newest-cni-587652            | jenkins | v1.32.0 | 19 Mar 24 20:55 UTC | 19 Mar 24 20:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-587652                                   | newest-cni-587652            | jenkins | v1.32.0 | 19 Mar 24 20:55 UTC | 19 Mar 24 20:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-587652                  | newest-cni-587652            | jenkins | v1.32.0 | 19 Mar 24 20:56 UTC | 19 Mar 24 20:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-587652 --memory=2200 --alsologtostderr   | newest-cni-587652            | jenkins | v1.32.0 | 19 Mar 24 20:56 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| ssh     | -p auto-378078 pgrep -a                                | auto-378078                  | jenkins | v1.32.0 | 19 Mar 24 20:56 UTC | 19 Mar 24 20:56 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-378078 sudo cat                                | auto-378078                  | jenkins | v1.32.0 | 19 Mar 24 20:56 UTC | 19 Mar 24 20:56 UTC |
	|         | /etc/nsswitch.conf                                     |                              |         |         |                     |                     |
	| ssh     | -p auto-378078 sudo cat                                | auto-378078                  | jenkins | v1.32.0 | 19 Mar 24 20:56 UTC | 19 Mar 24 20:56 UTC |
	|         | /etc/hosts                                             |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:56:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:56:04.693941   65515 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:56:04.694073   65515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:56:04.694084   65515 out.go:304] Setting ErrFile to fd 2...
	I0319 20:56:04.694091   65515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:56:04.694289   65515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:56:04.694806   65515 out.go:298] Setting JSON to false
	I0319 20:56:04.695658   65515 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9463,"bootTime":1710872302,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:56:04.695718   65515 start.go:139] virtualization: kvm guest
	I0319 20:56:04.698853   65515 out.go:177] * [newest-cni-587652] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:56:04.700472   65515 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:56:04.700497   65515 notify.go:220] Checking for updates...
	I0319 20:56:04.701851   65515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:56:04.703295   65515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:56:04.704625   65515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:56:04.706077   65515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:56:04.707683   65515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:56:04.709490   65515 config.go:182] Loaded profile config "newest-cni-587652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:56:04.710002   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:04.710044   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:04.728636   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0319 20:56:04.729098   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:04.729594   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:04.729619   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:04.729968   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:04.730184   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:04.730448   65515 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:56:04.730878   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:04.730922   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:04.746328   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44979
	I0319 20:56:04.746722   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:04.747285   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:04.747314   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:04.747713   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:04.747902   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:04.784992   65515 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:56:04.786631   65515 start.go:297] selected driver: kvm2
	I0319 20:56:04.786654   65515 start.go:901] validating driver "kvm2" against &{Name:newest-cni-587652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-beta.0 ClusterName:newest-cni-587652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.214 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:56:04.786824   65515 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:56:04.787847   65515 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:56:04.787953   65515 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:56:04.803068   65515 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:56:04.803530   65515 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0319 20:56:04.803616   65515 cni.go:84] Creating CNI manager for ""
	I0319 20:56:04.803639   65515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:56:04.803701   65515 start.go:340] cluster config:
	{Name:newest-cni-587652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-587652 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.214 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:56:04.803830   65515 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:56:04.805524   65515 out.go:177] * Starting "newest-cni-587652" primary control-plane node in "newest-cni-587652" cluster
	I0319 20:56:04.806879   65515 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:56:04.806912   65515 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0319 20:56:04.806932   65515 cache.go:56] Caching tarball of preloaded images
	I0319 20:56:04.807015   65515 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:56:04.807030   65515 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on crio
	I0319 20:56:04.807143   65515 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/config.json ...
	I0319 20:56:04.807352   65515 start.go:360] acquireMachinesLock for newest-cni-587652: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:56:04.807402   65515 start.go:364] duration metric: took 25.32µs to acquireMachinesLock for "newest-cni-587652"
	I0319 20:56:04.807422   65515 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:56:04.807429   65515 fix.go:54] fixHost starting: 
	I0319 20:56:04.807747   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:04.807794   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:04.822341   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I0319 20:56:04.822819   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:04.823352   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:04.823388   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:04.823782   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:04.823993   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:04.824278   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetState
	I0319 20:56:04.826112   65515 fix.go:112] recreateIfNeeded on newest-cni-587652: state=Stopped err=<nil>
	I0319 20:56:04.826141   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	W0319 20:56:04.826306   65515 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:56:04.827958   65515 out.go:177] * Restarting existing kvm2 VM for "newest-cni-587652" ...
	I0319 20:56:07.101622   64917 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:56:07.101700   64917 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:56:07.101787   64917 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:56:07.101955   64917 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:56:07.102108   64917 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:56:07.102196   64917 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:56:07.103732   64917 out.go:204]   - Generating certificates and keys ...
	I0319 20:56:07.103828   64917 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:56:07.103911   64917 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:56:07.104012   64917 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 20:56:07.104102   64917 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 20:56:07.104185   64917 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 20:56:07.104248   64917 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 20:56:07.104349   64917 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 20:56:07.104518   64917 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-378078 localhost] and IPs [192.168.72.51 127.0.0.1 ::1]
	I0319 20:56:07.104595   64917 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 20:56:07.104749   64917 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-378078 localhost] and IPs [192.168.72.51 127.0.0.1 ::1]
	I0319 20:56:07.104873   64917 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 20:56:07.104957   64917 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 20:56:07.105021   64917 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 20:56:07.105127   64917 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:56:07.105197   64917 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:56:07.105266   64917 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:56:07.105340   64917 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:56:07.105424   64917 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:56:07.105491   64917 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:56:07.105594   64917 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:56:07.105677   64917 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:56:07.107171   64917 out.go:204]   - Booting up control plane ...
	I0319 20:56:07.107292   64917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:56:07.107403   64917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:56:07.107520   64917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:56:07.107698   64917 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:56:07.107836   64917 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:56:07.107902   64917 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:56:07.108148   64917 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:56:07.108276   64917 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503231 seconds
	I0319 20:56:07.108411   64917 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:56:07.108589   64917 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:56:07.108688   64917 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:56:07.108933   64917 kubeadm.go:309] [mark-control-plane] Marking the node auto-378078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:56:07.109008   64917 kubeadm.go:309] [bootstrap-token] Using token: mev4ib.0gqmqu1z56y7tv5g
	I0319 20:56:07.111878   64917 out.go:204]   - Configuring RBAC rules ...
	I0319 20:56:07.112016   64917 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:56:07.112126   64917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:56:07.112374   64917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:56:07.112537   64917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:56:07.112623   64917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:56:07.112765   64917 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:56:07.112927   64917 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:56:07.112993   64917 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:56:07.113055   64917 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:56:07.113066   64917 kubeadm.go:309] 
	I0319 20:56:07.113159   64917 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:56:07.113171   64917 kubeadm.go:309] 
	I0319 20:56:07.113278   64917 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:56:07.113289   64917 kubeadm.go:309] 
	I0319 20:56:07.113321   64917 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:56:07.113390   64917 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:56:07.113457   64917 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:56:07.113468   64917 kubeadm.go:309] 
	I0319 20:56:07.113544   64917 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:56:07.113552   64917 kubeadm.go:309] 
	I0319 20:56:07.113590   64917 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:56:07.113596   64917 kubeadm.go:309] 
	I0319 20:56:07.113637   64917 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:56:07.113717   64917 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:56:07.113800   64917 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:56:07.113808   64917 kubeadm.go:309] 
	I0319 20:56:07.113895   64917 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:56:07.113987   64917 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:56:07.113995   64917 kubeadm.go:309] 
	I0319 20:56:07.114093   64917 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mev4ib.0gqmqu1z56y7tv5g \
	I0319 20:56:07.114230   64917 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:56:07.114263   64917 kubeadm.go:309] 	--control-plane 
	I0319 20:56:07.114272   64917 kubeadm.go:309] 
	I0319 20:56:07.114381   64917 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:56:07.114391   64917 kubeadm.go:309] 
	I0319 20:56:07.114495   64917 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mev4ib.0gqmqu1z56y7tv5g \
	I0319 20:56:07.114635   64917 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:56:07.114650   64917 cni.go:84] Creating CNI manager for ""
	I0319 20:56:07.114658   64917 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:56:07.116380   64917 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:56:07.117707   64917 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:56:07.135181   64917 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:56:07.172156   64917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:56:07.172311   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:07.172376   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-378078 minikube.k8s.io/updated_at=2024_03_19T20_56_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=auto-378078 minikube.k8s.io/primary=true
	I0319 20:56:07.481472   64917 ops.go:34] apiserver oom_adj: -16
	I0319 20:56:07.481609   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:07.982456   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:08.482667   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:04.829070   65515 main.go:141] libmachine: (newest-cni-587652) Calling .Start
	I0319 20:56:04.829280   65515 main.go:141] libmachine: (newest-cni-587652) Ensuring networks are active...
	I0319 20:56:04.830051   65515 main.go:141] libmachine: (newest-cni-587652) Ensuring network default is active
	I0319 20:56:04.830370   65515 main.go:141] libmachine: (newest-cni-587652) Ensuring network mk-newest-cni-587652 is active
	I0319 20:56:04.830909   65515 main.go:141] libmachine: (newest-cni-587652) Getting domain xml...
	I0319 20:56:04.831694   65515 main.go:141] libmachine: (newest-cni-587652) Creating domain...
	I0319 20:56:06.143372   65515 main.go:141] libmachine: (newest-cni-587652) Waiting to get IP...
	I0319 20:56:06.144394   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:06.144945   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:06.145021   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:06.144916   65550 retry.go:31] will retry after 309.299512ms: waiting for machine to come up
	I0319 20:56:06.456335   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:06.456979   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:06.457008   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:06.456892   65550 retry.go:31] will retry after 308.946503ms: waiting for machine to come up
	I0319 20:56:06.767669   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:06.768299   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:06.768334   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:06.768229   65550 retry.go:31] will retry after 330.328598ms: waiting for machine to come up
	I0319 20:56:07.099853   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:07.100421   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:07.100444   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:07.100381   65550 retry.go:31] will retry after 598.457015ms: waiting for machine to come up
	I0319 20:56:07.700041   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:07.700471   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:07.700502   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:07.700418   65550 retry.go:31] will retry after 581.423395ms: waiting for machine to come up
	I0319 20:56:08.283177   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:08.283586   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:08.283615   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:08.283555   65550 retry.go:31] will retry after 649.804964ms: waiting for machine to come up
	I0319 20:56:08.935258   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:08.935788   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:08.935819   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:08.935747   65550 retry.go:31] will retry after 725.588037ms: waiting for machine to come up
	I0319 20:56:09.662971   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:09.663471   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:09.663496   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:09.663441   65550 retry.go:31] will retry after 1.101183038s: waiting for machine to come up
	I0319 20:56:08.981857   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:09.482674   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:09.982094   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:10.481807   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:10.982646   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:11.482211   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:11.982025   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:12.481735   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:12.982658   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:13.482478   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:10.765973   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:10.766408   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:10.766441   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:10.766362   65550 retry.go:31] will retry after 1.778090593s: waiting for machine to come up
	I0319 20:56:12.546089   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:12.546518   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:12.546550   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:12.546476   65550 retry.go:31] will retry after 1.798094949s: waiting for machine to come up
	I0319 20:56:14.347577   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:14.348141   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:14.348203   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:14.348085   65550 retry.go:31] will retry after 2.572134448s: waiting for machine to come up
	I0319 20:56:13.982405   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:14.482204   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:14.982706   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:15.481778   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:15.982604   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:16.481707   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:16.982309   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:17.481826   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:17.981878   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:18.482306   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:18.981860   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:19.482217   64917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:56:19.610206   64917 kubeadm.go:1107] duration metric: took 12.437952926s to wait for elevateKubeSystemPrivileges
	W0319 20:56:19.610240   64917 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:56:19.610248   64917 kubeadm.go:393] duration metric: took 23.949413239s to StartCluster
	I0319 20:56:19.610267   64917 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:56:19.610370   64917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:56:19.612164   64917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:56:19.612480   64917 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.72.51 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:56:19.614238   64917 out.go:177] * Verifying Kubernetes components...
	I0319 20:56:19.612540   64917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 20:56:19.612550   64917 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:56:19.612764   64917 config.go:182] Loaded profile config "auto-378078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:56:19.615729   64917 addons.go:69] Setting storage-provisioner=true in profile "auto-378078"
	I0319 20:56:19.615762   64917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:56:19.615771   64917 addons.go:234] Setting addon storage-provisioner=true in "auto-378078"
	I0319 20:56:19.615760   64917 addons.go:69] Setting default-storageclass=true in profile "auto-378078"
	I0319 20:56:19.615802   64917 host.go:66] Checking if "auto-378078" exists ...
	I0319 20:56:19.615820   64917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-378078"
	I0319 20:56:19.616142   64917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:19.616172   64917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:19.616617   64917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:19.616652   64917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:19.631524   64917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0319 20:56:19.631950   64917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
	I0319 20:56:19.632135   64917 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:19.632314   64917 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:19.632749   64917 main.go:141] libmachine: Using API Version  1
	I0319 20:56:19.632817   64917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:19.632872   64917 main.go:141] libmachine: Using API Version  1
	I0319 20:56:19.632883   64917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:19.633180   64917 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:19.633189   64917 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:19.633361   64917 main.go:141] libmachine: (auto-378078) Calling .GetState
	I0319 20:56:19.633774   64917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:19.633824   64917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:19.637387   64917 addons.go:234] Setting addon default-storageclass=true in "auto-378078"
	I0319 20:56:19.637428   64917 host.go:66] Checking if "auto-378078" exists ...
	I0319 20:56:19.637813   64917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:19.637876   64917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:19.649795   64917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43607
	I0319 20:56:19.650199   64917 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:19.650633   64917 main.go:141] libmachine: Using API Version  1
	I0319 20:56:19.650648   64917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:19.651014   64917 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:19.651207   64917 main.go:141] libmachine: (auto-378078) Calling .GetState
	I0319 20:56:19.653168   64917 main.go:141] libmachine: (auto-378078) Calling .DriverName
	I0319 20:56:19.655353   64917 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:56:16.921880   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:16.922429   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:16.922463   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:16.922372   65550 retry.go:31] will retry after 2.455459682s: waiting for machine to come up
	I0319 20:56:19.379951   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:19.380483   65515 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:56:19.380534   65515 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:56:19.380435   65550 retry.go:31] will retry after 3.775494971s: waiting for machine to come up
	I0319 20:56:19.654029   64917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0319 20:56:19.656971   64917 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:56:19.656992   64917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:56:19.657013   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHHostname
	I0319 20:56:19.657182   64917 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:19.657958   64917 main.go:141] libmachine: Using API Version  1
	I0319 20:56:19.657981   64917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:19.658370   64917 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:19.659030   64917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:19.659066   64917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:19.660419   64917 main.go:141] libmachine: (auto-378078) DBG | domain auto-378078 has defined MAC address 52:54:00:b5:43:e5 in network mk-auto-378078
	I0319 20:56:19.660843   64917 main.go:141] libmachine: (auto-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:e5", ip: ""} in network mk-auto-378078: {Iface:virbr4 ExpiryTime:2024-03-19 21:55:35 +0000 UTC Type:0 Mac:52:54:00:b5:43:e5 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:auto-378078 Clientid:01:52:54:00:b5:43:e5}
	I0319 20:56:19.660886   64917 main.go:141] libmachine: (auto-378078) DBG | domain auto-378078 has defined IP address 192.168.72.51 and MAC address 52:54:00:b5:43:e5 in network mk-auto-378078
	I0319 20:56:19.661024   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHPort
	I0319 20:56:19.661198   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHKeyPath
	I0319 20:56:19.661328   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHUsername
	I0319 20:56:19.661486   64917 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/auto-378078/id_rsa Username:docker}
	I0319 20:56:19.677453   64917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I0319 20:56:19.677879   64917 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:19.678303   64917 main.go:141] libmachine: Using API Version  1
	I0319 20:56:19.678324   64917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:19.678667   64917 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:19.678836   64917 main.go:141] libmachine: (auto-378078) Calling .GetState
	I0319 20:56:19.680583   64917 main.go:141] libmachine: (auto-378078) Calling .DriverName
	I0319 20:56:19.680817   64917 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:56:19.680834   64917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:56:19.680850   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHHostname
	I0319 20:56:19.684122   64917 main.go:141] libmachine: (auto-378078) DBG | domain auto-378078 has defined MAC address 52:54:00:b5:43:e5 in network mk-auto-378078
	I0319 20:56:19.684556   64917 main.go:141] libmachine: (auto-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:43:e5", ip: ""} in network mk-auto-378078: {Iface:virbr4 ExpiryTime:2024-03-19 21:55:35 +0000 UTC Type:0 Mac:52:54:00:b5:43:e5 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:auto-378078 Clientid:01:52:54:00:b5:43:e5}
	I0319 20:56:19.684584   64917 main.go:141] libmachine: (auto-378078) DBG | domain auto-378078 has defined IP address 192.168.72.51 and MAC address 52:54:00:b5:43:e5 in network mk-auto-378078
	I0319 20:56:19.684704   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHPort
	I0319 20:56:19.684869   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHKeyPath
	I0319 20:56:19.685014   64917 main.go:141] libmachine: (auto-378078) Calling .GetSSHUsername
	I0319 20:56:19.685131   64917 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/auto-378078/id_rsa Username:docker}
	I0319 20:56:19.968007   64917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:56:19.968359   64917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0319 20:56:20.028102   64917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:56:20.045446   64917 node_ready.go:35] waiting up to 15m0s for node "auto-378078" to be "Ready" ...
	I0319 20:56:20.049689   64917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:56:20.059259   64917 node_ready.go:49] node "auto-378078" has status "Ready":"True"
	I0319 20:56:20.059279   64917 node_ready.go:38] duration metric: took 13.794374ms for node "auto-378078" to be "Ready" ...
	I0319 20:56:20.059288   64917 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:56:20.067059   64917 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-8b8rn" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:20.593862   64917 start.go:948] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0319 20:56:20.593963   64917 main.go:141] libmachine: Making call to close driver server
	I0319 20:56:20.593988   64917 main.go:141] libmachine: (auto-378078) Calling .Close
	I0319 20:56:20.594245   64917 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:56:20.594272   64917 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:56:20.594287   64917 main.go:141] libmachine: Making call to close driver server
	I0319 20:56:20.594307   64917 main.go:141] libmachine: (auto-378078) Calling .Close
	I0319 20:56:20.594522   64917 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:56:20.594537   64917 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:56:20.609015   64917 main.go:141] libmachine: Making call to close driver server
	I0319 20:56:20.609036   64917 main.go:141] libmachine: (auto-378078) Calling .Close
	I0319 20:56:20.609312   64917 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:56:20.609331   64917 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:56:20.834813   64917 main.go:141] libmachine: Making call to close driver server
	I0319 20:56:20.834840   64917 main.go:141] libmachine: (auto-378078) Calling .Close
	I0319 20:56:20.835107   64917 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:56:20.835128   64917 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:56:20.835138   64917 main.go:141] libmachine: Making call to close driver server
	I0319 20:56:20.835146   64917 main.go:141] libmachine: (auto-378078) Calling .Close
	I0319 20:56:20.835431   64917 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:56:20.835448   64917 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:56:20.835473   64917 main.go:141] libmachine: (auto-378078) DBG | Closing plugin on server side
	I0319 20:56:20.837146   64917 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0319 20:56:20.838448   64917 addons.go:505] duration metric: took 1.22589246s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0319 20:56:21.098494   64917 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-378078" context rescaled to 1 replicas
	I0319 20:56:22.074574   64917 pod_ready.go:102] pod "coredns-76f75df574-8b8rn" in "kube-system" namespace has status "Ready":"False"
	I0319 20:56:22.575005   64917 pod_ready.go:92] pod "coredns-76f75df574-8b8rn" in "kube-system" namespace has status "Ready":"True"
	I0319 20:56:22.575034   64917 pod_ready.go:81] duration metric: took 2.50794851s for pod "coredns-76f75df574-8b8rn" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.575046   64917 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-cqq47" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.576911   64917 pod_ready.go:97] error getting pod "coredns-76f75df574-cqq47" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-cqq47" not found
	I0319 20:56:22.576930   64917 pod_ready.go:81] duration metric: took 1.877945ms for pod "coredns-76f75df574-cqq47" in "kube-system" namespace to be "Ready" ...
	E0319 20:56:22.576938   64917 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-cqq47" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-cqq47" not found
	I0319 20:56:22.576944   64917 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.582586   64917 pod_ready.go:92] pod "etcd-auto-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:56:22.582602   64917 pod_ready.go:81] duration metric: took 5.651893ms for pod "etcd-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.582612   64917 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.587502   64917 pod_ready.go:92] pod "kube-apiserver-auto-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:56:22.587517   64917 pod_ready.go:81] duration metric: took 4.89931ms for pod "kube-apiserver-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.587525   64917 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.591856   64917 pod_ready.go:92] pod "kube-controller-manager-auto-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:56:22.591871   64917 pod_ready.go:81] duration metric: took 4.340954ms for pod "kube-controller-manager-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.591878   64917 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-tp5mh" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.771394   64917 pod_ready.go:92] pod "kube-proxy-tp5mh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:56:22.771417   64917 pod_ready.go:81] duration metric: took 179.532588ms for pod "kube-proxy-tp5mh" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:22.771425   64917 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:23.174245   64917 pod_ready.go:92] pod "kube-scheduler-auto-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:56:23.174273   64917 pod_ready.go:81] duration metric: took 402.840887ms for pod "kube-scheduler-auto-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:56:23.174284   64917 pod_ready.go:38] duration metric: took 3.114982585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:56:23.174302   64917 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:56:23.174363   64917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:56:23.192756   64917 api_server.go:72] duration metric: took 3.580239969s to wait for apiserver process to appear ...
	I0319 20:56:23.192785   64917 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:56:23.192809   64917 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8443/healthz ...
	I0319 20:56:23.198108   64917 api_server.go:279] https://192.168.72.51:8443/healthz returned 200:
	ok
	I0319 20:56:23.199303   64917 api_server.go:141] control plane version: v1.29.3
	I0319 20:56:23.199327   64917 api_server.go:131] duration metric: took 6.534604ms to wait for apiserver health ...
	I0319 20:56:23.199337   64917 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:56:23.373731   64917 system_pods.go:59] 7 kube-system pods found
	I0319 20:56:23.373759   64917 system_pods.go:61] "coredns-76f75df574-8b8rn" [09dcb5a0-1535-4f2f-9f05-afa2aceb7b96] Running
	I0319 20:56:23.373763   64917 system_pods.go:61] "etcd-auto-378078" [ba17f1b5-cb38-4b43-8bfb-a0c6e243c696] Running
	I0319 20:56:23.373767   64917 system_pods.go:61] "kube-apiserver-auto-378078" [2aea5ccb-7524-48f9-ae95-d38d977691c9] Running
	I0319 20:56:23.373770   64917 system_pods.go:61] "kube-controller-manager-auto-378078" [589d2728-d482-4b67-81d9-34e5e84ef10c] Running
	I0319 20:56:23.373773   64917 system_pods.go:61] "kube-proxy-tp5mh" [c6115984-dee3-4508-8a6d-3612c8642322] Running
	I0319 20:56:23.373776   64917 system_pods.go:61] "kube-scheduler-auto-378078" [fa51e898-f8ac-4c6a-9735-7d32fea0d9db] Running
	I0319 20:56:23.373778   64917 system_pods.go:61] "storage-provisioner" [decceacd-7a3f-4c2f-8eaf-dbbbed21a9f8] Running
	I0319 20:56:23.373784   64917 system_pods.go:74] duration metric: took 174.441188ms to wait for pod list to return data ...
	I0319 20:56:23.373791   64917 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:56:23.572438   64917 default_sa.go:45] found service account: "default"
	I0319 20:56:23.572467   64917 default_sa.go:55] duration metric: took 198.670431ms for default service account to be created ...
	I0319 20:56:23.572478   64917 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:56:23.774638   64917 system_pods.go:86] 7 kube-system pods found
	I0319 20:56:23.774669   64917 system_pods.go:89] "coredns-76f75df574-8b8rn" [09dcb5a0-1535-4f2f-9f05-afa2aceb7b96] Running
	I0319 20:56:23.774678   64917 system_pods.go:89] "etcd-auto-378078" [ba17f1b5-cb38-4b43-8bfb-a0c6e243c696] Running
	I0319 20:56:23.774685   64917 system_pods.go:89] "kube-apiserver-auto-378078" [2aea5ccb-7524-48f9-ae95-d38d977691c9] Running
	I0319 20:56:23.774692   64917 system_pods.go:89] "kube-controller-manager-auto-378078" [589d2728-d482-4b67-81d9-34e5e84ef10c] Running
	I0319 20:56:23.774699   64917 system_pods.go:89] "kube-proxy-tp5mh" [c6115984-dee3-4508-8a6d-3612c8642322] Running
	I0319 20:56:23.774705   64917 system_pods.go:89] "kube-scheduler-auto-378078" [fa51e898-f8ac-4c6a-9735-7d32fea0d9db] Running
	I0319 20:56:23.774710   64917 system_pods.go:89] "storage-provisioner" [decceacd-7a3f-4c2f-8eaf-dbbbed21a9f8] Running
	I0319 20:56:23.774719   64917 system_pods.go:126] duration metric: took 202.234091ms to wait for k8s-apps to be running ...
	I0319 20:56:23.774742   64917 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:56:23.774798   64917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:56:23.794155   64917 system_svc.go:56] duration metric: took 19.405164ms WaitForService to wait for kubelet
	I0319 20:56:23.794183   64917 kubeadm.go:576] duration metric: took 4.181671651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:56:23.794205   64917 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:56:23.970903   64917 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:56:23.970927   64917 node_conditions.go:123] node cpu capacity is 2
	I0319 20:56:23.970942   64917 node_conditions.go:105] duration metric: took 176.732223ms to run NodePressure ...
	I0319 20:56:23.970952   64917 start.go:240] waiting for startup goroutines ...
	I0319 20:56:23.970959   64917 start.go:245] waiting for cluster config update ...
	I0319 20:56:23.970968   64917 start.go:254] writing updated cluster config ...
	I0319 20:56:23.971185   64917 ssh_runner.go:195] Run: rm -f paused
	I0319 20:56:24.025429   64917 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:56:24.028741   64917 out.go:177] * Done! kubectl is now configured to use "auto-378078" cluster and "default" namespace by default
	I0319 20:56:23.157278   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.157810   65515 main.go:141] libmachine: (newest-cni-587652) Found IP for machine: 192.168.61.214
	I0319 20:56:23.157848   65515 main.go:141] libmachine: (newest-cni-587652) Reserving static IP address...
	I0319 20:56:23.157870   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has current primary IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.158242   65515 main.go:141] libmachine: (newest-cni-587652) Reserved static IP address: 192.168.61.214
	I0319 20:56:23.158277   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "newest-cni-587652", mac: "52:54:00:13:03:1e", ip: "192.168.61.214"} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.158299   65515 main.go:141] libmachine: (newest-cni-587652) Waiting for SSH to be available...
	I0319 20:56:23.158336   65515 main.go:141] libmachine: (newest-cni-587652) DBG | skip adding static IP to network mk-newest-cni-587652 - found existing host DHCP lease matching {name: "newest-cni-587652", mac: "52:54:00:13:03:1e", ip: "192.168.61.214"}
	I0319 20:56:23.158361   65515 main.go:141] libmachine: (newest-cni-587652) DBG | Getting to WaitForSSH function...
	I0319 20:56:23.160700   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.161076   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.161116   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.161233   65515 main.go:141] libmachine: (newest-cni-587652) DBG | Using SSH client type: external
	I0319 20:56:23.161261   65515 main.go:141] libmachine: (newest-cni-587652) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa (-rw-------)
	I0319 20:56:23.161304   65515 main.go:141] libmachine: (newest-cni-587652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:56:23.161319   65515 main.go:141] libmachine: (newest-cni-587652) DBG | About to run SSH command:
	I0319 20:56:23.161347   65515 main.go:141] libmachine: (newest-cni-587652) DBG | exit 0
	I0319 20:56:23.296683   65515 main.go:141] libmachine: (newest-cni-587652) DBG | SSH cmd err, output: <nil>: 
	I0319 20:56:23.297053   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetConfigRaw
	I0319 20:56:23.297751   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetIP
	I0319 20:56:23.300445   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.300824   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.300862   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.301223   65515 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/config.json ...
	I0319 20:56:23.301461   65515 machine.go:94] provisionDockerMachine start ...
	I0319 20:56:23.301485   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:23.301684   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:23.304186   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.304583   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.304611   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.304774   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:23.304957   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.305137   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.305264   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:23.305438   65515 main.go:141] libmachine: Using SSH client type: native
	I0319 20:56:23.305607   65515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.214 22 <nil> <nil>}
	I0319 20:56:23.305617   65515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:56:23.429421   65515 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:56:23.429458   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetMachineName
	I0319 20:56:23.429766   65515 buildroot.go:166] provisioning hostname "newest-cni-587652"
	I0319 20:56:23.429814   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetMachineName
	I0319 20:56:23.430017   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:23.432901   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.433353   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.433383   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.433577   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:23.433789   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.433968   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.434152   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:23.434388   65515 main.go:141] libmachine: Using SSH client type: native
	I0319 20:56:23.434616   65515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.214 22 <nil> <nil>}
	I0319 20:56:23.434635   65515 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-587652 && echo "newest-cni-587652" | sudo tee /etc/hostname
	I0319 20:56:23.573934   65515 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-587652
	
	I0319 20:56:23.573967   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:23.576652   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.577048   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.577076   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.577324   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:23.577514   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.577717   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.577910   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:23.578080   65515 main.go:141] libmachine: Using SSH client type: native
	I0319 20:56:23.578291   65515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.214 22 <nil> <nil>}
	I0319 20:56:23.578309   65515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-587652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-587652/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-587652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:56:23.707281   65515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:56:23.707310   65515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:56:23.707347   65515 buildroot.go:174] setting up certificates
	I0319 20:56:23.707371   65515 provision.go:84] configureAuth start
	I0319 20:56:23.707388   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetMachineName
	I0319 20:56:23.707711   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetIP
	I0319 20:56:23.710462   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.710878   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.710910   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.711029   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:23.713254   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.713602   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.713631   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.713713   65515 provision.go:143] copyHostCerts
	I0319 20:56:23.713781   65515 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:56:23.713791   65515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:56:23.713871   65515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:56:23.713991   65515 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:56:23.714004   65515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:56:23.714041   65515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:56:23.714108   65515 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:56:23.714117   65515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:56:23.714152   65515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:56:23.714212   65515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.newest-cni-587652 san=[127.0.0.1 192.168.61.214 localhost minikube newest-cni-587652]
	I0319 20:56:23.830168   65515 provision.go:177] copyRemoteCerts
	I0319 20:56:23.830221   65515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:56:23.830256   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:23.832878   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.833256   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:23.833284   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:23.833521   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:23.833746   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:23.833931   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:23.834076   65515 sshutil.go:53] new ssh client: &{IP:192.168.61.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa Username:docker}
	I0319 20:56:23.929665   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:56:23.958848   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:56:23.990377   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:56:24.019519   65515 provision.go:87] duration metric: took 312.130291ms to configureAuth
	I0319 20:56:24.019549   65515 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:56:24.019774   65515 config.go:182] Loaded profile config "newest-cni-587652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:56:24.019867   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:24.023381   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.023970   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:24.024002   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.024151   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:24.024379   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.024548   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.024702   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:24.024878   65515 main.go:141] libmachine: Using SSH client type: native
	I0319 20:56:24.025099   65515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.214 22 <nil> <nil>}
	I0319 20:56:24.025120   65515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:56:24.343002   65515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:56:24.343025   65515 machine.go:97] duration metric: took 1.041547307s to provisionDockerMachine
	I0319 20:56:24.343036   65515 start.go:293] postStartSetup for "newest-cni-587652" (driver="kvm2")
	I0319 20:56:24.343046   65515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:56:24.343059   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:24.343355   65515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:56:24.343375   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:24.345866   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.346170   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:24.346214   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.346386   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:24.346581   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.346754   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:24.346887   65515 sshutil.go:53] new ssh client: &{IP:192.168.61.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa Username:docker}
	I0319 20:56:24.445712   65515 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:56:24.451387   65515 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:56:24.451408   65515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:56:24.451463   65515 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:56:24.451545   65515 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:56:24.451682   65515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:56:24.463435   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:56:24.497530   65515 start.go:296] duration metric: took 154.481327ms for postStartSetup
	I0319 20:56:24.497577   65515 fix.go:56] duration metric: took 19.690149036s for fixHost
	I0319 20:56:24.497602   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:24.500329   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.500732   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:24.500770   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.500939   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:24.501164   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.501392   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.501546   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:24.501717   65515 main.go:141] libmachine: Using SSH client type: native
	I0319 20:56:24.501919   65515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.214 22 <nil> <nil>}
	I0319 20:56:24.501930   65515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:56:24.634321   65515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710881784.612511023
	
	I0319 20:56:24.634350   65515 fix.go:216] guest clock: 1710881784.612511023
	I0319 20:56:24.634369   65515 fix.go:229] Guest: 2024-03-19 20:56:24.612511023 +0000 UTC Remote: 2024-03-19 20:56:24.497582719 +0000 UTC m=+19.857190846 (delta=114.928304ms)
	I0319 20:56:24.634387   65515 fix.go:200] guest clock delta is within tolerance: 114.928304ms
	I0319 20:56:24.634404   65515 start.go:83] releasing machines lock for "newest-cni-587652", held for 19.826983352s
	I0319 20:56:24.634432   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:24.634669   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetIP
	I0319 20:56:24.637502   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.637973   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:24.637996   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.638121   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:24.638714   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:24.638930   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:24.639023   65515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:56:24.639065   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:24.639368   65515 ssh_runner.go:195] Run: cat /version.json
	I0319 20:56:24.639393   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:24.642006   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.642289   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.642350   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:24.642371   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.642556   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:24.642763   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.642919   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:24.643027   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:24.643044   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:24.643071   65515 sshutil.go:53] new ssh client: &{IP:192.168.61.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa Username:docker}
	I0319 20:56:24.643351   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:24.643541   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHKeyPath
	I0319 20:56:24.643703   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHUsername
	I0319 20:56:24.643849   65515 sshutil.go:53] new ssh client: &{IP:192.168.61.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa Username:docker}
	I0319 20:56:24.734429   65515 ssh_runner.go:195] Run: systemctl --version
	I0319 20:56:24.758566   65515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:56:24.911607   65515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:56:24.919516   65515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:56:24.919601   65515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:56:24.938511   65515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:56:24.938539   65515 start.go:494] detecting cgroup driver to use...
	I0319 20:56:24.938621   65515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:56:24.959817   65515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:56:24.974517   65515 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:56:24.974573   65515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:56:24.989529   65515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:56:25.004720   65515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:56:25.129814   65515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:56:25.311298   65515 docker.go:233] disabling docker service ...
	I0319 20:56:25.311356   65515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:56:25.327972   65515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:56:25.346946   65515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:56:25.495253   65515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:56:25.659054   65515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:56:25.679100   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:56:25.703685   65515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:56:25.703752   65515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.716135   65515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:56:25.716204   65515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.731258   65515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.746851   65515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.759994   65515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:56:25.773007   65515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.788452   65515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.813943   65515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:56:25.825942   65515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:56:25.838584   65515 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:56:25.838640   65515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:56:25.857880   65515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:56:25.871155   65515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:56:26.025072   65515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:56:26.220886   65515 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:56:26.220959   65515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:56:26.228221   65515 start.go:562] Will wait 60s for crictl version
	I0319 20:56:26.228297   65515 ssh_runner.go:195] Run: which crictl
	I0319 20:56:26.234142   65515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:56:26.273773   65515 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:56:26.273860   65515 ssh_runner.go:195] Run: crio --version
	I0319 20:56:26.307728   65515 ssh_runner.go:195] Run: crio --version
	I0319 20:56:26.342494   65515 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:56:26.344313   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetIP
	I0319 20:56:26.347609   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:26.348063   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:26.348091   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:26.348356   65515 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:56:26.353163   65515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:56:26.369674   65515 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0319 20:56:26.371099   65515 kubeadm.go:877] updating cluster {Name:newest-cni-587652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:newest-cni-587652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.214 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:56:26.371273   65515 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:56:26.371361   65515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:56:26.419927   65515 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:56:26.420006   65515 ssh_runner.go:195] Run: which lz4
	I0319 20:56:26.424842   65515 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:56:26.431598   65515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:56:26.431629   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394394271 bytes)
	I0319 20:56:28.263592   65515 crio.go:462] duration metric: took 1.8387919s to copy over tarball
	I0319 20:56:28.263698   65515 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:56:30.924070   65515 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.660304265s)
	I0319 20:56:30.924102   65515 crio.go:469] duration metric: took 2.660476777s to extract the tarball
	I0319 20:56:30.924111   65515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:56:30.965859   65515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:56:31.021032   65515 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:56:31.021056   65515 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:56:31.021067   65515 kubeadm.go:928] updating node { 192.168.61.214 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:56:31.021204   65515 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-587652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-587652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:56:31.021285   65515 ssh_runner.go:195] Run: crio config
	I0319 20:56:31.084230   65515 cni.go:84] Creating CNI manager for ""
	I0319 20:56:31.084274   65515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:56:31.084292   65515 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0319 20:56:31.084322   65515 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.214 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-587652 NodeName:newest-cni-587652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.61.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:56:31.084532   65515 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-587652"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:56:31.084606   65515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:56:31.095929   65515 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:56:31.095982   65515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:56:31.106495   65515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0319 20:56:31.125869   65515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:56:31.145196   65515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0319 20:56:31.168349   65515 ssh_runner.go:195] Run: grep 192.168.61.214	control-plane.minikube.internal$ /etc/hosts
	I0319 20:56:31.173583   65515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:56:31.190337   65515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:56:31.340556   65515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:56:31.360574   65515 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652 for IP: 192.168.61.214
	I0319 20:56:31.360593   65515 certs.go:194] generating shared ca certs ...
	I0319 20:56:31.360605   65515 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:56:31.360753   65515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:56:31.360801   65515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:56:31.360811   65515 certs.go:256] generating profile certs ...
	I0319 20:56:31.360882   65515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/client.key
	I0319 20:56:31.360933   65515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/apiserver.key.3d15ed0f
	I0319 20:56:31.360975   65515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/proxy-client.key
	I0319 20:56:31.361078   65515 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:56:31.361107   65515 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:56:31.361119   65515 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:56:31.361140   65515 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:56:31.361162   65515 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:56:31.361185   65515 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:56:31.361231   65515 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:56:31.362035   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:56:31.400824   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:56:31.433406   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:56:31.467276   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:56:31.505028   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:56:31.534862   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:56:31.564024   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:56:31.593937   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 20:56:31.622592   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:56:31.650972   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:56:31.678789   65515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:56:31.707561   65515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:56:31.726669   65515 ssh_runner.go:195] Run: openssl version
	I0319 20:56:31.733687   65515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:56:31.748046   65515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:56:31.754633   65515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:56:31.754703   65515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:56:31.761717   65515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:56:31.775151   65515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:56:31.790076   65515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:56:31.795540   65515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:56:31.795595   65515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:56:31.802078   65515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:56:31.816083   65515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:56:31.829582   65515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:56:31.834891   65515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:56:31.834950   65515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:56:31.842099   65515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:56:31.855000   65515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:56:31.860469   65515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:56:31.867601   65515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:56:31.874886   65515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:56:31.882020   65515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:56:31.888616   65515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:56:31.895502   65515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
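	(Aside: each "openssl x509 -noout -checkend 86400" call above asks whether the certificate expires within the next 24 hours; the restart path only proceeds when none of them do. A rough, hypothetical Go equivalent of one such check, with the certificate path assumed for illustration:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, mirroring what "openssl x509 -checkend" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path assumed for illustration; the log above checks certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}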
	I0319 20:56:31.902789   65515 kubeadm.go:391] StartCluster: {Name:newest-cni-587652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-beta.0 ClusterName:newest-cni-587652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.214 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartH
ostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:56:31.902925   65515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:56:31.902973   65515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:56:31.950145   65515 cri.go:89] found id: ""
	I0319 20:56:31.950226   65515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:56:31.962397   65515 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:56:31.962422   65515 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:56:31.962427   65515 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:56:31.962498   65515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:56:31.975823   65515 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:56:31.976779   65515 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-587652" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:56:31.977410   65515 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-587652" cluster setting kubeconfig missing "newest-cni-587652" context setting]
	I0319 20:56:31.978688   65515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:56:31.980323   65515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:56:31.993213   65515 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.214
	I0319 20:56:31.993247   65515 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:56:31.993266   65515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:56:31.993315   65515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:56:32.042153   65515 cri.go:89] found id: ""
	I0319 20:56:32.042216   65515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:56:32.063239   65515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:56:32.075877   65515 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:56:32.075901   65515 kubeadm.go:156] found existing configuration files:
	
	I0319 20:56:32.075944   65515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:56:32.087808   65515 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:56:32.087878   65515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:56:32.100202   65515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:56:32.112991   65515 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:56:32.113043   65515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:56:32.123808   65515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:56:32.134688   65515 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:56:32.134741   65515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:56:32.146337   65515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:56:32.157629   65515 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:56:32.157685   65515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:56:32.170666   65515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:56:32.184843   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:56:32.310749   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:56:33.419557   65515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108770249s)
	I0319 20:56:33.419592   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:56:33.640479   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:56:33.740805   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:56:33.833132   65515 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:56:33.833217   65515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:56:34.333974   65515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:56:34.834205   65515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:56:34.917230   65515 api_server.go:72] duration metric: took 1.084096344s to wait for apiserver process to appear ...
	I0319 20:56:34.917259   65515 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:56:34.917276   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:34.917961   65515 api_server.go:269] stopped: https://192.168.61.214:8443/healthz: Get "https://192.168.61.214:8443/healthz": dial tcp 192.168.61.214:8443: connect: connection refused
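	(Aside: the probes that follow are the apiserver readiness wait: hit /healthz on a short interval, treat connection refused, 403 (before RBAC bootstraps), and 500 (while post-start hooks run) as "not ready yet", and stop once a 200 comes back or the timeout expires. A stripped-down sketch of that pattern, with the interval, timeout, and TLS handling assumed for illustration rather than taken from minikube's source:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz endpoint until it returns
	// 200 OK or the timeout expires. Connection errors and non-200 statuses
	// are treated as "keep waiting".
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The serving certificate is not verified in this sketch; the real
			// check is more careful about certificates.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.214:8443/healthz", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
		}
	}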
	I0319 20:56:35.417429   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:38.206633   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:56:38.206662   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:56:38.206674   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:38.253619   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:56:38.253647   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:56:38.417912   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:38.422808   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:38.422833   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:38.918078   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:38.937740   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:38.937772   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:39.418345   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:39.423530   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:39.423568   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:39.918020   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:39.925564   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:39.925592   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:40.418178   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:40.422985   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:40.423016   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:40.917550   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:40.921814   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:40.921846   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:41.417388   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:41.423817   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:56:41.423853   65515 api_server.go:103] status: https://192.168.61.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:56:41.918285   65515 api_server.go:253] Checking apiserver healthz at https://192.168.61.214:8443/healthz ...
	I0319 20:56:41.923496   65515 api_server.go:279] https://192.168.61.214:8443/healthz returned 200:
	ok
	I0319 20:56:41.932944   65515 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:56:41.932975   65515 api_server.go:131] duration metric: took 7.015708756s to wait for apiserver health ...
	I0319 20:56:41.932986   65515 cni.go:84] Creating CNI manager for ""
	I0319 20:56:41.932994   65515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:56:41.935182   65515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:56:41.936698   65515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:56:41.951544   65515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:56:41.975992   65515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:56:41.997819   65515 system_pods.go:59] 8 kube-system pods found
	I0319 20:56:41.997870   65515 system_pods.go:61] "coredns-7db6d8ff4d-5br2z" [509438f0-85d8-4e9f-8f18-d3bbb7c26689] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:56:41.997892   65515 system_pods.go:61] "etcd-newest-cni-587652" [30465ec4-dbcd-4294-a77b-4fc0ff0f48e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:56:41.997915   65515 system_pods.go:61] "kube-apiserver-newest-cni-587652" [275d31af-0cf1-4fce-a81e-9a7eb97e862d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:56:41.997938   65515 system_pods.go:61] "kube-controller-manager-newest-cni-587652" [d2abb70e-9291-458e-8cdf-7102f615cf8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:56:41.997954   65515 system_pods.go:61] "kube-proxy-gkfl5" [d02f4a59-663e-4135-84b3-da87e22b91ff] Running
	I0319 20:56:41.997968   65515 system_pods.go:61] "kube-scheduler-newest-cni-587652" [651da262-666c-4067-afc5-3e67d994153e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:56:41.997981   65515 system_pods.go:61] "metrics-server-569cc877fc-bpwn8" [abce47be-c903-46c7-9b0a-b27a2d84cb95] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:56:41.997997   65515 system_pods.go:61] "storage-provisioner" [0c516491-ccb5-4ab0-9b69-3d4da7b21ec6] Running
	I0319 20:56:41.998011   65515 system_pods.go:74] duration metric: took 21.989842ms to wait for pod list to return data ...
	I0319 20:56:41.998030   65515 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:56:42.005325   65515 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:56:42.005373   65515 node_conditions.go:123] node cpu capacity is 2
	I0319 20:56:42.005387   65515 node_conditions.go:105] duration metric: took 7.34699ms to run NodePressure ...
	I0319 20:56:42.005405   65515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:56:42.377055   65515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:56:42.395762   65515 ops.go:34] apiserver oom_adj: -16
	I0319 20:56:42.395782   65515 kubeadm.go:591] duration metric: took 10.43334138s to restartPrimaryControlPlane
	I0319 20:56:42.395795   65515 kubeadm.go:393] duration metric: took 10.49301263s to StartCluster
	I0319 20:56:42.395814   65515 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:56:42.395895   65515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:56:42.398575   65515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:56:42.398829   65515 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.214 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:56:42.400801   65515 out.go:177] * Verifying Kubernetes components...
	I0319 20:56:42.398988   65515 config.go:182] Loaded profile config "newest-cni-587652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:56:42.399006   65515 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:56:42.402344   65515 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-587652"
	I0319 20:56:42.402356   65515 addons.go:69] Setting dashboard=true in profile "newest-cni-587652"
	I0319 20:56:42.402376   65515 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-587652"
	W0319 20:56:42.402385   65515 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:56:42.402402   65515 addons.go:234] Setting addon dashboard=true in "newest-cni-587652"
	W0319 20:56:42.402412   65515 addons.go:243] addon dashboard should already be in state true
	I0319 20:56:42.402418   65515 host.go:66] Checking if "newest-cni-587652" exists ...
	I0319 20:56:42.402444   65515 host.go:66] Checking if "newest-cni-587652" exists ...
	I0319 20:56:42.402840   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.402874   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.402883   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.402903   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.402979   65515 addons.go:69] Setting default-storageclass=true in profile "newest-cni-587652"
	I0319 20:56:42.403024   65515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-587652"
	I0319 20:56:42.403414   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.403435   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.403870   65515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:56:42.403965   65515 addons.go:69] Setting metrics-server=true in profile "newest-cni-587652"
	I0319 20:56:42.404026   65515 addons.go:234] Setting addon metrics-server=true in "newest-cni-587652"
	W0319 20:56:42.404032   65515 addons.go:243] addon metrics-server should already be in state true
	I0319 20:56:42.404054   65515 host.go:66] Checking if "newest-cni-587652" exists ...
	I0319 20:56:42.404430   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.404453   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.422335   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0319 20:56:42.422840   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.423380   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.423398   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.423758   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.423915   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetState
	I0319 20:56:42.427965   65515 addons.go:234] Setting addon default-storageclass=true in "newest-cni-587652"
	W0319 20:56:42.427982   65515 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:56:42.428011   65515 host.go:66] Checking if "newest-cni-587652" exists ...
	I0319 20:56:42.428417   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.428459   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.428845   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0319 20:56:42.429240   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.429711   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.429726   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.430077   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.430792   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.430835   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.435501   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I0319 20:56:42.435904   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.436393   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.436411   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.436862   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.437546   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.437585   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.441797   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I0319 20:56:42.442566   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.443089   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.443114   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.444984   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.445605   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.445651   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.452969   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0319 20:56:42.453532   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.454067   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.454094   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.454515   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.455096   65515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:56:42.455118   65515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:56:42.458291   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0319 20:56:42.458723   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.460784   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.460804   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.461425   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.461572   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetState
	I0319 20:56:42.463389   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:42.466141   65515 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0319 20:56:42.468399   65515 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0319 20:56:42.466909   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0319 20:56:42.469618   65515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0319 20:56:42.469702   65515 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0319 20:56:42.469715   65515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0319 20:56:42.469734   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:42.470170   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.470207   65515 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:56:42.470724   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.470743   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.470865   65515 main.go:141] libmachine: Using API Version  1
	I0319 20:56:42.470880   65515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:56:42.471248   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.471425   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetState
	I0319 20:56:42.472515   65515 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:56:42.472769   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetState
	I0319 20:56:42.473696   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:42.473778   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:42.475434   65515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:56:42.474328   65515 main.go:141] libmachine: (newest-cni-587652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:03:1e", ip: ""} in network mk-newest-cni-587652: {Iface:virbr3 ExpiryTime:2024-03-19 21:56:17 +0000 UTC Type:0 Mac:52:54:00:13:03:1e Iaid: IPaddr:192.168.61.214 Prefix:24 Hostname:newest-cni-587652 Clientid:01:52:54:00:13:03:1e}
	I0319 20:56:42.474762   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHPort
	I0319 20:56:42.475177   65515 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:56:42.476882   65515 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:56:42.476903   65515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:56:42.476921   65515 main.go:141] libmachine: (newest-cni-587652) Calling .GetSSHHostname
	I0319 20:56:42.476952   65515 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined IP address 192.168.61.214 and MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:56:42.479447   65515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Mar 19 20:56:42 embed-certs-421660 crio[695]: time="2024-03-19 20:56:42.999714505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881802999686626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3831b1ef-7a1a-4afe-928f-14539c6c9583 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.000719407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd29bf8a-2523-4a55-b622-48c153002d50 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.000772810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd29bf8a-2523-4a55-b622-48c153002d50 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.000968675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd29bf8a-2523-4a55-b622-48c153002d50 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.055148665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da819653-63a5-42be-92b6-474ac29d037d name=/runtime.v1.RuntimeService/Version
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.055380007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da819653-63a5-42be-92b6-474ac29d037d name=/runtime.v1.RuntimeService/Version
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.057124975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b906408c-d0db-4231-91e2-461c453a3000 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.057905208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881803057878261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b906408c-d0db-4231-91e2-461c453a3000 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.059413969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16ff2d5c-15d2-4f45-9d0c-61122ea8fd4b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.059527625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16ff2d5c-15d2-4f45-9d0c-61122ea8fd4b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.059821211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16ff2d5c-15d2-4f45-9d0c-61122ea8fd4b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.135407511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8144b200-e76a-4cda-abc2-544583346142 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.135509735Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8144b200-e76a-4cda-abc2-544583346142 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.137655932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca32a101-74c3-4eec-ad67-8cbc912680f8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.138328090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881803138292256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca32a101-74c3-4eec-ad67-8cbc912680f8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.139144119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3449c68-5af1-447b-b809-2932ddf094d3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.139526861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3449c68-5af1-447b-b809-2932ddf094d3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.139846409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3449c68-5af1-447b-b809-2932ddf094d3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.203637569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc8e80e4-a4e8-46e4-93e1-7ffba6dd95a3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.203789583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc8e80e4-a4e8-46e4-93e1-7ffba6dd95a3 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.205961106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90d34b9c-e9f6-4fd5-a34d-5b1754c95b81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.207137742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881803207100648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90d34b9c-e9f6-4fd5-a34d-5b1754c95b81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.207981067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44e5ca11-cf3f-41c2-9d47-f4d565a059b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.208108140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44e5ca11-cf3f-41c2-9d47-f4d565a059b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:56:43 embed-certs-421660 crio[695]: time="2024-03-19 20:56:43.209354795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880558034950904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84cea0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62044eee90d6f5d3bce593a4edbb449f567dfe5eb1bcd0a03b87ee5b5e887e97,PodSandboxId:d2d82268fa0f01ffce3a8c6dcbec7fa38278ef4f575e55dd3d48a9bb88cc74a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710880537621949506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6a03291-9dc2-4996-b992-a06b76d63603,},Annotations:map[string]string{io.kubernetes.container.hash: beebff31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef,PodSandboxId:27180cf91e8e1677b8781b8301fbb89bd15eb2f5c279831b7726799eb47a2ae8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880534572380623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9tdfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b2be11-82a4-49cd-b937-ed38214db991,},Annotations:map[string]string{io.kubernetes.container.hash: 4e2961d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748,PodSandboxId:25d2326204517466b4ba07f47b169988fb0cb9368117616f345ed1c47d2b6e7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1710880527209884534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvn26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2869d5-3602-4cc0-80
c1-cf01cda5971c,},Annotations:map[string]string{io.kubernetes.container.hash: 830fb647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5,PodSandboxId:0fa6a0f32c877900b799d31559e9389b453b77845620bf4fae11dddda8e08c26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710880527154862478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84b7ff7-ed12-4404-b142-2c331a84ce
a0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a3d1359,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be,PodSandboxId:7a1a317e12f3cbd5742b8a93dff531764d6bc14c7aa0d49c77a6bb8b470f9edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880523510877571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e16d74f0dfae792dc8e306a44f95ea,},Annotat
ions:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3,PodSandboxId:5087175ca6e5422fd8f743d747b6488d90f7b4927ff164b1d6e88541675bd117,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1710880523544495672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa152e179594de88b0dbc118f8
24a12,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166,PodSandboxId:854ff60b1dcd93b721a4faba2e78a805f76b2f392448d857a1cee208f11b56d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880523488694522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f76ecc4dd080e0fbb79023fecccf710,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1aebf1bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8,PodSandboxId:bb1c287d8f38681e94b4e1f06f596eb79d864250c6fe9720f1554ca192fd36a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880523447347535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-421660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1fc9b67fd8c78fe144739d1b32edf3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: a2159969,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44e5ca11-cf3f-41c2-9d47-f4d565a059b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	54948b2ac3f01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   0fa6a0f32c877       storage-provisioner
	62044eee90d6f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   d2d82268fa0f0       busybox
	2b137c65a3111       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   27180cf91e8e1       coredns-76f75df574-9tdfg
	b8bd4bb1ef229       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      21 minutes ago      Running             kube-proxy                1                   25d2326204517       kube-proxy-qvn26
	7cf3f6946847f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   0fa6a0f32c877       storage-provisioner
	33f6eb05f3ff8       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      21 minutes ago      Running             kube-controller-manager   1                   5087175ca6e54       kube-controller-manager-embed-certs-421660
	f6f6bbd4f740d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      21 minutes ago      Running             kube-scheduler            1                   7a1a317e12f3c       kube-scheduler-embed-certs-421660
	e2f9da9940d12       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      21 minutes ago      Running             kube-apiserver            1                   854ff60b1dcd9       kube-apiserver-embed-certs-421660
	c2391bc9672e3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   bb1c287d8f386       etcd-embed-certs-421660
	
	
	==> coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39975 - 28664 "HINFO IN 9037374147638026213.6766847950462541327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019097871s
	
	
	==> describe nodes <==
	Name:               embed-certs-421660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-421660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=embed-certs-421660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_27_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:27:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-421660
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:56:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:56:20 +0000   Tue, 19 Mar 2024 20:27:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:56:20 +0000   Tue, 19 Mar 2024 20:27:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:56:20 +0000   Tue, 19 Mar 2024 20:27:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:56:20 +0000   Tue, 19 Mar 2024 20:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.108
	  Hostname:    embed-certs-421660
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c70488eeac7540f4ac95b35d7265089b
	  System UUID:                c70488ee-ac75-40f4-ac95-b35d7265089b
	  Boot ID:                    05cd69a6-52f3-4411-97cf-09c07d0b0ca4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-76f75df574-9tdfg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-421660                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-421660             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-421660    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-qvn26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-421660             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-xbh7v               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-421660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-421660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-421660 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-421660 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-421660 event: Registered Node embed-certs-421660 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-421660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-421660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-421660 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-421660 event: Registered Node embed-certs-421660 in Controller
	
	
	==> dmesg <==
	[Mar19 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052182] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042648] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar19 20:35] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.397824] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.650836] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.245942] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.057412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068419] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.184218] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.157233] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.349010] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +4.967584] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.060068] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.983193] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +4.602032] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.507758] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +3.235567] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.107711] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] <==
	{"level":"info","ts":"2024-03-19T20:35:44.871607Z","caller":"traceutil/trace.go:171","msg":"trace[1805808659] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"361.672179ms","start":"2024-03-19T20:35:44.509921Z","end":"2024-03-19T20:35:44.871593Z","steps":["trace[1805808659] 'process raft request'  (duration: 359.4794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:35:44.871722Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:35:44.509907Z","time spent":"361.765396ms","remote":"127.0.0.1:55566","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1310,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-8q6xv\" mod_revision:595 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-8q6xv\" value_size:1251 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-8q6xv\" > >"}
	{"level":"info","ts":"2024-03-19T20:35:44.87202Z","caller":"traceutil/trace.go:171","msg":"trace[1481147434] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"360.1033ms","start":"2024-03-19T20:35:44.511906Z","end":"2024-03-19T20:35:44.872009Z","steps":["trace[1481147434] 'process raft request'  (duration: 357.894982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:35:44.872762Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:35:44.511897Z","time spent":"360.758455ms","remote":"127.0.0.1:55782","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-76f75df574\" mod_revision:596 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" > >"}
	{"level":"info","ts":"2024-03-19T20:36:28.020629Z","caller":"traceutil/trace.go:171","msg":"trace[1562562091] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"156.033295ms","start":"2024-03-19T20:36:27.864564Z","end":"2024-03-19T20:36:28.020598Z","steps":["trace[1562562091] 'process raft request'  (duration: 121.280066ms)","trace[1562562091] 'compare'  (duration: 34.543468ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-19T20:36:29.872293Z","caller":"traceutil/trace.go:171","msg":"trace[1428623987] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"190.34256ms","start":"2024-03-19T20:36:29.68193Z","end":"2024-03-19T20:36:29.872273Z","steps":["trace[1428623987] 'process raft request'  (duration: 190.142198ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:45:25.028517Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-03-19T20:45:25.039708Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":849,"took":"10.549319ms","hash":4051401027,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2539520,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-03-19T20:45:25.0398Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4051401027,"revision":849,"compact-revision":-1}
	{"level":"info","ts":"2024-03-19T20:50:25.038535Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2024-03-19T20:50:25.043585Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1092,"took":"4.396995ms","hash":580358940,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-03-19T20:50:25.043679Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":580358940,"revision":1092,"compact-revision":849}
	{"level":"info","ts":"2024-03-19T20:55:22.117745Z","caller":"traceutil/trace.go:171","msg":"trace[235989316] transaction","detail":"{read_only:false; response_revision:1575; number_of_response:1; }","duration":"106.123023ms","start":"2024-03-19T20:55:22.011581Z","end":"2024-03-19T20:55:22.117704Z","steps":["trace[235989316] 'process raft request'  (duration: 105.889955ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:55:22.181813Z","caller":"traceutil/trace.go:171","msg":"trace[1982279103] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"153.578472ms","start":"2024-03-19T20:55:22.028149Z","end":"2024-03-19T20:55:22.181728Z","steps":["trace[1982279103] 'process raft request'  (duration: 153.273204ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:55:23.099144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.199534ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17708591720234909163 > lease_revoke:<id:75c18e586d067d9e>","response":"size:28"}
	{"level":"info","ts":"2024-03-19T20:55:23.099438Z","caller":"traceutil/trace.go:171","msg":"trace[962031951] linearizableReadLoop","detail":"{readStateIndex:1863; appliedIndex:1862; }","duration":"229.637682ms","start":"2024-03-19T20:55:22.869782Z","end":"2024-03-19T20:55:23.09942Z","steps":["trace[962031951] 'read index received'  (duration: 105.649394ms)","trace[962031951] 'applied index is now lower than readState.Index'  (duration: 123.986698ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:55:23.099612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.787705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:55:23.099675Z","caller":"traceutil/trace.go:171","msg":"trace[101263139] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1576; }","duration":"229.914831ms","start":"2024-03-19T20:55:22.869748Z","end":"2024-03-19T20:55:23.099663Z","steps":["trace[101263139] 'agreement among raft nodes before linearized reading'  (duration: 229.785284ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:55:24.432606Z","caller":"traceutil/trace.go:171","msg":"trace[1399765441] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"304.54357ms","start":"2024-03-19T20:55:24.128039Z","end":"2024-03-19T20:55:24.432583Z","steps":["trace[1399765441] 'process raft request'  (duration: 303.977033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:55:24.432794Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:55:24.127996Z","time spent":"304.682661ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1575 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-19T20:55:25.047004Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1334}
	{"level":"info","ts":"2024-03-19T20:55:25.051636Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1334,"took":"4.040767ms","hash":2779240307,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-03-19T20:55:25.051785Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2779240307,"revision":1334,"compact-revision":1092}
	{"level":"info","ts":"2024-03-19T20:55:54.768707Z","caller":"traceutil/trace.go:171","msg":"trace[448927188] transaction","detail":"{read_only:false; response_revision:1603; number_of_response:1; }","duration":"158.936018ms","start":"2024-03-19T20:55:54.609725Z","end":"2024-03-19T20:55:54.768661Z","steps":["trace[448927188] 'process raft request'  (duration: 158.685278ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:56:33.086667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.171687ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17708591720234909514 > lease_revoke:<id:75c18e586d067ef8>","response":"size:28"}
	
	
	==> kernel <==
	 20:56:43 up 21 min,  0 users,  load average: 0.23, 0.27, 0.25
	Linux embed-certs-421660 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] <==
	I0319 20:51:27.510676       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:53:27.510705       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:53:27.510846       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:53:27.510857       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:53:27.511078       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:53:27.511286       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:53:27.512937       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:55:26.512871       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:55:26.513008       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0319 20:55:27.513348       1 handler_proxy.go:93] no RequestInfo found in the context
	W0319 20:55:27.513345       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:55:27.513602       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:55:27.513629       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0319 20:55:27.513706       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:55:27.515809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:56:27.514438       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:56:27.514834       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:56:27.514899       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:56:27.516283       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:56:27.516394       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:56:27.516428       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] <==
	I0319 20:51:09.562363       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:51:39.058106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:51:39.571118       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:51:57.818674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="279.011µs"
	E0319 20:52:09.063471       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:52:09.581149       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:52:09.820092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="153.587µs"
	E0319 20:52:39.069009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:52:39.589890       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:53:09.076317       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:53:09.597634       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:53:39.082006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:53:39.605431       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:54:09.087244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:54:09.614107       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:54:39.092054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:54:39.622100       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:55:09.097221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:55:09.630471       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:55:39.107871       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:55:39.640715       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:56:09.113076       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:56:09.650241       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:56:39.120763       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:56:39.660490       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] <==
	I0319 20:35:27.364663       1 server_others.go:72] "Using iptables proxy"
	I0319 20:35:27.374604       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.108"]
	I0319 20:35:27.445322       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:35:27.445369       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:35:27.445393       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:35:27.453461       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:35:27.453700       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:35:27.453737       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:35:27.458273       1 config.go:188] "Starting service config controller"
	I0319 20:35:27.459251       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:35:27.458479       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:35:27.459313       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:35:27.458872       1 config.go:315] "Starting node config controller"
	I0319 20:35:27.459322       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:35:27.560053       1 shared_informer.go:318] Caches are synced for node config
	I0319 20:35:27.560104       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:35:27.560126       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] <==
	I0319 20:35:24.781422       1 serving.go:380] Generated self-signed cert in-memory
	W0319 20:35:26.454605       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0319 20:35:26.454811       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:35:26.454848       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0319 20:35:26.454929       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0319 20:35:26.517400       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0319 20:35:26.517448       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:35:26.519757       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 20:35:26.519912       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 20:35:26.519926       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 20:35:26.519965       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0319 20:35:26.621136       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:54:22 embed-certs-421660 kubelet[907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:54:22 embed-certs-421660 kubelet[907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:54:22 embed-certs-421660 kubelet[907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:54:22 embed-certs-421660 kubelet[907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:54:28 embed-certs-421660 kubelet[907]: E0319 20:54:28.802976     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:54:42 embed-certs-421660 kubelet[907]: E0319 20:54:42.802607     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:54:57 embed-certs-421660 kubelet[907]: E0319 20:54:57.802618     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:55:09 embed-certs-421660 kubelet[907]: E0319 20:55:09.802549     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:55:22 embed-certs-421660 kubelet[907]: E0319 20:55:22.805103     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:55:22 embed-certs-421660 kubelet[907]: E0319 20:55:22.828083     907 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:55:22 embed-certs-421660 kubelet[907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:55:22 embed-certs-421660 kubelet[907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:55:22 embed-certs-421660 kubelet[907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:55:22 embed-certs-421660 kubelet[907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:55:37 embed-certs-421660 kubelet[907]: E0319 20:55:37.803539     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:55:52 embed-certs-421660 kubelet[907]: E0319 20:55:52.803994     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:56:03 embed-certs-421660 kubelet[907]: E0319 20:56:03.801974     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:56:16 embed-certs-421660 kubelet[907]: E0319 20:56:16.805297     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:56:22 embed-certs-421660 kubelet[907]: E0319 20:56:22.826712     907 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:56:22 embed-certs-421660 kubelet[907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:56:22 embed-certs-421660 kubelet[907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:56:22 embed-certs-421660 kubelet[907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:56:22 embed-certs-421660 kubelet[907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:56:28 embed-certs-421660 kubelet[907]: E0319 20:56:28.803118     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	Mar 19 20:56:42 embed-certs-421660 kubelet[907]: E0319 20:56:42.804312     907 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbh7v" podUID="7cb1baf4-fcb9-4126-9437-45fc6228821f"
	
	
	==> storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] <==
	I0319 20:35:58.170082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:35:58.184685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:35:58.184879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:36:15.592837       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:36:15.593832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-421660_be25b29b-6025-419b-80ed-f7d6f26cfd68!
	I0319 20:36:15.593543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad8d11a3-ac43-4481-ba89-bd8da41d2da8", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-421660_be25b29b-6025-419b-80ed-f7d6f26cfd68 became leader
	I0319 20:36:15.695128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-421660_be25b29b-6025-419b-80ed-f7d6f26cfd68!
	
	
	==> storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] <==
	I0319 20:35:27.285310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0319 20:35:57.288947       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-421660 -n embed-certs-421660
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-421660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xbh7v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-421660 describe pod metrics-server-57f55c9bc5-xbh7v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-421660 describe pod metrics-server-57f55c9bc5-xbh7v: exit status 1 (86.421608ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xbh7v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-421660 describe pod metrics-server-57f55c9bc5-xbh7v: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (462.89s)
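For reference, a minimal Go sketch of the non-running-pod check that helpers_test.go performs above via kubectl. The context name embed-certs-421660 is taken from this profile's logs, and kubectl is assumed to be on PATH; this is an illustrative reproduction under those assumptions, not the test suite's own implementation.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Assumed profile/kube context name, taken from the logs above.
		context := "embed-certs-421660"

		// Same field selector the post-mortem helper uses to find pods that
		// are not in phase Running, across all namespaces.
		out, err := exec.Command(
			"kubectl", "--context", context,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl failed: %v\n%s", err, out)
		}
		fmt.Printf("non-running pods: %s\n", out)
	}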

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (536.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-19 20:59:21.298078642 +0000 UTC m=+6885.418698422
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-385240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.804µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-385240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-385240 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-385240 logs -n 25: (2.039291142s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-378078 sudo iptables                       | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo docker                         | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo cat                            | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo                                | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo find                           | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p calico-378078 sudo crio                           | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC | 19 Mar 24 20:59 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p calico-378078                                     | calico-378078 | jenkins | v1.32.0 | 19 Mar 24 20:59 UTC |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:58:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:58:27.438790   69942 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:58:27.439083   69942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:58:27.439110   69942 out.go:304] Setting ErrFile to fd 2...
	I0319 20:58:27.439121   69942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:58:27.439308   69942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:58:27.439857   69942 out.go:298] Setting JSON to false
	I0319 20:58:27.440978   69942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9605,"bootTime":1710872302,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:58:27.441060   69942 start.go:139] virtualization: kvm guest
	I0319 20:58:27.443725   69942 out.go:177] * [enable-default-cni-378078] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:58:27.445673   69942 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:58:27.445676   69942 notify.go:220] Checking for updates...
	I0319 20:58:27.447200   69942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:58:27.448790   69942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:58:27.450280   69942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:58:27.451723   69942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:58:27.453643   69942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:58:27.455621   69942 config.go:182] Loaded profile config "calico-378078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:58:27.455771   69942 config.go:182] Loaded profile config "custom-flannel-378078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:58:27.455856   69942 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:58:27.455947   69942 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:58:27.502917   69942 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:58:27.504441   69942 start.go:297] selected driver: kvm2
	I0319 20:58:27.504460   69942 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:58:27.504475   69942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:58:27.505181   69942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:58:27.505252   69942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:58:27.521375   69942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:58:27.521431   69942 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0319 20:58:27.521655   69942 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0319 20:58:27.521682   69942 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:58:27.521751   69942 cni.go:84] Creating CNI manager for "bridge"
	I0319 20:58:27.521766   69942 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 20:58:27.521831   69942 start.go:340] cluster config:
	{Name:enable-default-cni-378078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-378078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:58:27.521960   69942 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:58:27.525731   69942 out.go:177] * Starting "enable-default-cni-378078" primary control-plane node in "enable-default-cni-378078" cluster
	I0319 20:58:27.527049   69942 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:58:27.527084   69942 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:58:27.527093   69942 cache.go:56] Caching tarball of preloaded images
	I0319 20:58:27.527169   69942 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:58:27.527180   69942 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:58:27.527274   69942 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/config.json ...
	I0319 20:58:27.527295   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/config.json: {Name:mk0f86cad74cd40ac0e67776afc52018059b4d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:58:27.527447   69942 start.go:360] acquireMachinesLock for enable-default-cni-378078: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:58:27.527491   69942 start.go:364] duration metric: took 22.199µs to acquireMachinesLock for "enable-default-cni-378078"
	I0319 20:58:27.527513   69942 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-378078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-378078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:58:27.527565   69942 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 20:58:29.588377   67880 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.503921 seconds
	I0319 20:58:29.605109   67880 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:58:29.627828   67880 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:58:30.177739   67880 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:58:30.177984   67880 kubeadm.go:309] [mark-control-plane] Marking the node custom-flannel-378078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:58:30.696821   67880 kubeadm.go:309] [bootstrap-token] Using token: 2mom70.8gm8on0tax2kslwq
	I0319 20:58:26.533516   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:28.534925   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:30.698324   67880 out.go:204]   - Configuring RBAC rules ...
	I0319 20:58:30.698477   67880 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:58:30.706157   67880 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:58:30.719337   67880 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:58:30.725498   67880 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:58:30.734734   67880 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:58:30.740541   67880 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:58:30.772461   67880 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:58:31.044121   67880 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:58:31.113062   67880 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:58:31.114718   67880 kubeadm.go:309] 
	I0319 20:58:31.114820   67880 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:58:31.114832   67880 kubeadm.go:309] 
	I0319 20:58:31.114930   67880 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:58:31.114942   67880 kubeadm.go:309] 
	I0319 20:58:31.114972   67880 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:58:31.115041   67880 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:58:31.115106   67880 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:58:31.115117   67880 kubeadm.go:309] 
	I0319 20:58:31.115203   67880 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:58:31.115214   67880 kubeadm.go:309] 
	I0319 20:58:31.115275   67880 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:58:31.115285   67880 kubeadm.go:309] 
	I0319 20:58:31.115356   67880 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:58:31.115459   67880 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:58:31.115557   67880 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:58:31.115564   67880 kubeadm.go:309] 
	I0319 20:58:31.115632   67880 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:58:31.115693   67880 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:58:31.115697   67880 kubeadm.go:309] 
	I0319 20:58:31.115764   67880 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2mom70.8gm8on0tax2kslwq \
	I0319 20:58:31.115846   67880 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:58:31.115863   67880 kubeadm.go:309] 	--control-plane 
	I0319 20:58:31.115866   67880 kubeadm.go:309] 
	I0319 20:58:31.115957   67880 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:58:31.115962   67880 kubeadm.go:309] 
	I0319 20:58:31.116027   67880 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2mom70.8gm8on0tax2kslwq \
	I0319 20:58:31.116121   67880 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:58:31.118231   67880 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:58:31.118268   67880 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0319 20:58:31.119966   67880 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0319 20:58:27.529361   69942 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0319 20:58:27.529507   69942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:58:27.529558   69942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:58:27.545022   69942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34677
	I0319 20:58:27.545399   69942 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:58:27.545938   69942 main.go:141] libmachine: Using API Version  1
	I0319 20:58:27.545963   69942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:58:27.546365   69942 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:58:27.546571   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetMachineName
	I0319 20:58:27.546817   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:27.546982   69942 start.go:159] libmachine.API.Create for "enable-default-cni-378078" (driver="kvm2")
	I0319 20:58:27.547010   69942 client.go:168] LocalClient.Create starting
	I0319 20:58:27.547072   69942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 20:58:27.547120   69942 main.go:141] libmachine: Decoding PEM data...
	I0319 20:58:27.547141   69942 main.go:141] libmachine: Parsing certificate...
	I0319 20:58:27.547208   69942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 20:58:27.547234   69942 main.go:141] libmachine: Decoding PEM data...
	I0319 20:58:27.547255   69942 main.go:141] libmachine: Parsing certificate...
	I0319 20:58:27.547285   69942 main.go:141] libmachine: Running pre-create checks...
	I0319 20:58:27.547298   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .PreCreateCheck
	I0319 20:58:27.547607   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetConfigRaw
	I0319 20:58:27.547952   69942 main.go:141] libmachine: Creating machine...
	I0319 20:58:27.547963   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .Create
	I0319 20:58:27.548089   69942 main.go:141] libmachine: (enable-default-cni-378078) Creating KVM machine...
	I0319 20:58:27.549458   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found existing default KVM network
	I0319 20:58:27.550745   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:27.550599   69965 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:50:1f} reservation:<nil>}
	I0319 20:58:27.552246   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:27.552144   69965 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000219c70}
	I0319 20:58:27.552286   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | created network xml: 
	I0319 20:58:27.552303   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | <network>
	I0319 20:58:27.552313   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |   <name>mk-enable-default-cni-378078</name>
	I0319 20:58:27.552318   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |   <dns enable='no'/>
	I0319 20:58:27.552323   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |   
	I0319 20:58:27.552329   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0319 20:58:27.552335   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |     <dhcp>
	I0319 20:58:27.552340   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0319 20:58:27.552346   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |     </dhcp>
	I0319 20:58:27.552350   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |   </ip>
	I0319 20:58:27.552355   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG |   
	I0319 20:58:27.552360   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | </network>
	I0319 20:58:27.552366   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | 
	I0319 20:58:27.558094   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | trying to create private KVM network mk-enable-default-cni-378078 192.168.50.0/24...
	I0319 20:58:27.642427   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | private KVM network mk-enable-default-cni-378078 192.168.50.0/24 created
	I0319 20:58:27.642463   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:27.642404   69965 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:58:27.642490   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078 ...
	I0319 20:58:27.642504   69942 main.go:141] libmachine: (enable-default-cni-378078) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 20:58:27.642650   69942 main.go:141] libmachine: (enable-default-cni-378078) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 20:58:27.913351   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:27.913205   69965 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa...
	I0319 20:58:28.561924   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:28.561798   69965 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/enable-default-cni-378078.rawdisk...
	I0319 20:58:28.561959   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Writing magic tar header
	I0319 20:58:28.561979   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Writing SSH key tar header
	I0319 20:58:28.561992   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:28.561945   69965 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078 ...
	I0319 20:58:28.562341   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078 (perms=drwx------)
	I0319 20:58:28.562370   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078
	I0319 20:58:28.562383   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 20:58:28.562397   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 20:58:28.562412   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 20:58:28.562426   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 20:58:28.562442   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:58:28.562474   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 20:58:28.562489   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 20:58:28.562499   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 20:58:28.562511   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home/jenkins
	I0319 20:58:28.562519   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Checking permissions on dir: /home
	I0319 20:58:28.562538   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Skipping /home - not owner
	I0319 20:58:28.562553   69942 main.go:141] libmachine: (enable-default-cni-378078) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 20:58:28.562568   69942 main.go:141] libmachine: (enable-default-cni-378078) Creating domain...
	I0319 20:58:28.563493   69942 main.go:141] libmachine: (enable-default-cni-378078) define libvirt domain using xml: 
	I0319 20:58:28.563512   69942 main.go:141] libmachine: (enable-default-cni-378078) <domain type='kvm'>
	I0319 20:58:28.563530   69942 main.go:141] libmachine: (enable-default-cni-378078)   <name>enable-default-cni-378078</name>
	I0319 20:58:28.563540   69942 main.go:141] libmachine: (enable-default-cni-378078)   <memory unit='MiB'>3072</memory>
	I0319 20:58:28.563549   69942 main.go:141] libmachine: (enable-default-cni-378078)   <vcpu>2</vcpu>
	I0319 20:58:28.563556   69942 main.go:141] libmachine: (enable-default-cni-378078)   <features>
	I0319 20:58:28.563564   69942 main.go:141] libmachine: (enable-default-cni-378078)     <acpi/>
	I0319 20:58:28.563570   69942 main.go:141] libmachine: (enable-default-cni-378078)     <apic/>
	I0319 20:58:28.563579   69942 main.go:141] libmachine: (enable-default-cni-378078)     <pae/>
	I0319 20:58:28.563585   69942 main.go:141] libmachine: (enable-default-cni-378078)     
	I0319 20:58:28.563595   69942 main.go:141] libmachine: (enable-default-cni-378078)   </features>
	I0319 20:58:28.563603   69942 main.go:141] libmachine: (enable-default-cni-378078)   <cpu mode='host-passthrough'>
	I0319 20:58:28.563611   69942 main.go:141] libmachine: (enable-default-cni-378078)   
	I0319 20:58:28.563618   69942 main.go:141] libmachine: (enable-default-cni-378078)   </cpu>
	I0319 20:58:28.563626   69942 main.go:141] libmachine: (enable-default-cni-378078)   <os>
	I0319 20:58:28.563633   69942 main.go:141] libmachine: (enable-default-cni-378078)     <type>hvm</type>
	I0319 20:58:28.563642   69942 main.go:141] libmachine: (enable-default-cni-378078)     <boot dev='cdrom'/>
	I0319 20:58:28.563650   69942 main.go:141] libmachine: (enable-default-cni-378078)     <boot dev='hd'/>
	I0319 20:58:28.563659   69942 main.go:141] libmachine: (enable-default-cni-378078)     <bootmenu enable='no'/>
	I0319 20:58:28.563666   69942 main.go:141] libmachine: (enable-default-cni-378078)   </os>
	I0319 20:58:28.563682   69942 main.go:141] libmachine: (enable-default-cni-378078)   <devices>
	I0319 20:58:28.563690   69942 main.go:141] libmachine: (enable-default-cni-378078)     <disk type='file' device='cdrom'>
	I0319 20:58:28.563703   69942 main.go:141] libmachine: (enable-default-cni-378078)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/boot2docker.iso'/>
	I0319 20:58:28.563715   69942 main.go:141] libmachine: (enable-default-cni-378078)       <target dev='hdc' bus='scsi'/>
	I0319 20:58:28.563721   69942 main.go:141] libmachine: (enable-default-cni-378078)       <readonly/>
	I0319 20:58:28.563726   69942 main.go:141] libmachine: (enable-default-cni-378078)     </disk>
	I0319 20:58:28.563736   69942 main.go:141] libmachine: (enable-default-cni-378078)     <disk type='file' device='disk'>
	I0319 20:58:28.563742   69942 main.go:141] libmachine: (enable-default-cni-378078)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 20:58:28.563750   69942 main.go:141] libmachine: (enable-default-cni-378078)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/enable-default-cni-378078.rawdisk'/>
	I0319 20:58:28.563755   69942 main.go:141] libmachine: (enable-default-cni-378078)       <target dev='hda' bus='virtio'/>
	I0319 20:58:28.563764   69942 main.go:141] libmachine: (enable-default-cni-378078)     </disk>
	I0319 20:58:28.563769   69942 main.go:141] libmachine: (enable-default-cni-378078)     <interface type='network'>
	I0319 20:58:28.563775   69942 main.go:141] libmachine: (enable-default-cni-378078)       <source network='mk-enable-default-cni-378078'/>
	I0319 20:58:28.563780   69942 main.go:141] libmachine: (enable-default-cni-378078)       <model type='virtio'/>
	I0319 20:58:28.563787   69942 main.go:141] libmachine: (enable-default-cni-378078)     </interface>
	I0319 20:58:28.563791   69942 main.go:141] libmachine: (enable-default-cni-378078)     <interface type='network'>
	I0319 20:58:28.563797   69942 main.go:141] libmachine: (enable-default-cni-378078)       <source network='default'/>
	I0319 20:58:28.563803   69942 main.go:141] libmachine: (enable-default-cni-378078)       <model type='virtio'/>
	I0319 20:58:28.563808   69942 main.go:141] libmachine: (enable-default-cni-378078)     </interface>
	I0319 20:58:28.563814   69942 main.go:141] libmachine: (enable-default-cni-378078)     <serial type='pty'>
	I0319 20:58:28.563819   69942 main.go:141] libmachine: (enable-default-cni-378078)       <target port='0'/>
	I0319 20:58:28.563823   69942 main.go:141] libmachine: (enable-default-cni-378078)     </serial>
	I0319 20:58:28.563830   69942 main.go:141] libmachine: (enable-default-cni-378078)     <console type='pty'>
	I0319 20:58:28.563837   69942 main.go:141] libmachine: (enable-default-cni-378078)       <target type='serial' port='0'/>
	I0319 20:58:28.563845   69942 main.go:141] libmachine: (enable-default-cni-378078)     </console>
	I0319 20:58:28.563852   69942 main.go:141] libmachine: (enable-default-cni-378078)     <rng model='virtio'>
	I0319 20:58:28.563861   69942 main.go:141] libmachine: (enable-default-cni-378078)       <backend model='random'>/dev/random</backend>
	I0319 20:58:28.563868   69942 main.go:141] libmachine: (enable-default-cni-378078)     </rng>
	I0319 20:58:28.563880   69942 main.go:141] libmachine: (enable-default-cni-378078)     
	I0319 20:58:28.563887   69942 main.go:141] libmachine: (enable-default-cni-378078)     
	I0319 20:58:28.563895   69942 main.go:141] libmachine: (enable-default-cni-378078)   </devices>
	I0319 20:58:28.563913   69942 main.go:141] libmachine: (enable-default-cni-378078) </domain>
	I0319 20:58:28.563924   69942 main.go:141] libmachine: (enable-default-cni-378078) 
	I0319 20:58:28.568557   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:98:f8:db in network default
	I0319 20:58:28.569327   69942 main.go:141] libmachine: (enable-default-cni-378078) Ensuring networks are active...
	I0319 20:58:28.569350   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:28.570189   69942 main.go:141] libmachine: (enable-default-cni-378078) Ensuring network default is active
	I0319 20:58:28.570627   69942 main.go:141] libmachine: (enable-default-cni-378078) Ensuring network mk-enable-default-cni-378078 is active
	I0319 20:58:28.571325   69942 main.go:141] libmachine: (enable-default-cni-378078) Getting domain xml...
	I0319 20:58:28.572222   69942 main.go:141] libmachine: (enable-default-cni-378078) Creating domain...
	I0319 20:58:30.072968   69942 main.go:141] libmachine: (enable-default-cni-378078) Waiting to get IP...
	I0319 20:58:30.073884   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:30.074381   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:30.074412   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:30.074367   69965 retry.go:31] will retry after 234.02802ms: waiting for machine to come up
	I0319 20:58:30.309590   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:30.310162   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:30.310190   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:30.310109   69965 retry.go:31] will retry after 274.984935ms: waiting for machine to come up
	I0319 20:58:30.586570   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:30.587033   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:30.587098   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:30.586995   69965 retry.go:31] will retry after 453.889578ms: waiting for machine to come up
	I0319 20:58:31.042588   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:31.043277   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:31.043427   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:31.043365   69965 retry.go:31] will retry after 384.615672ms: waiting for machine to come up
	I0319 20:58:31.430278   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:31.430813   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:31.430855   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:31.430782   69965 retry.go:31] will retry after 615.170431ms: waiting for machine to come up
	I0319 20:58:32.047704   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:32.048293   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:32.048353   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:32.048247   69965 retry.go:31] will retry after 874.702145ms: waiting for machine to come up
	I0319 20:58:31.121246   67880 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0319 20:58:31.121303   67880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0319 20:58:31.140595   67880 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0319 20:58:31.140627   67880 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0319 20:58:31.220516   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0319 20:58:31.871012   67880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:58:31.871096   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:31.871096   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-378078 minikube.k8s.io/updated_at=2024_03_19T20_58_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=custom-flannel-378078 minikube.k8s.io/primary=true
	I0319 20:58:31.892837   67880 ops.go:34] apiserver oom_adj: -16
	I0319 20:58:32.044531   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:32.545499   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:33.044907   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:33.545496   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:34.045174   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:31.036024   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:33.511616   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:35.537325   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:32.925028   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:32.925530   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:32.925561   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:32.925480   69965 retry.go:31] will retry after 715.96226ms: waiting for machine to come up
	I0319 20:58:33.643537   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:33.645948   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:33.645972   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:33.644374   69965 retry.go:31] will retry after 1.44371204s: waiting for machine to come up
	I0319 20:58:35.089479   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:35.090073   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:35.090104   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:35.090015   69965 retry.go:31] will retry after 1.507103385s: waiting for machine to come up
	I0319 20:58:36.598976   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:36.599557   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:36.599588   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:36.599482   69965 retry.go:31] will retry after 1.762343644s: waiting for machine to come up
	I0319 20:58:34.544883   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:35.045342   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:35.544882   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:36.044991   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:36.545482   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:37.045441   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:37.545172   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:38.044893   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:38.545577   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:39.044631   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:38.039833   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:40.539017   67533 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:38.363475   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:38.364020   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:38.364163   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:38.364087   69965 retry.go:31] will retry after 2.594819072s: waiting for machine to come up
	I0319 20:58:40.961637   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:40.962215   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:40.962307   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:40.962268   69965 retry.go:31] will retry after 2.740414993s: waiting for machine to come up
	I0319 20:58:39.545469   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:40.044580   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:40.545224   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:41.045538   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:41.544647   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:42.045256   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:42.545460   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:43.045229   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:43.545269   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:44.045165   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:44.545250   67880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:58:44.721069   67880 kubeadm.go:1107] duration metric: took 12.850043512s to wait for elevateKubeSystemPrivileges
	W0319 20:58:44.721114   67880 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:58:44.721125   67880 kubeadm.go:393] duration metric: took 26.924459038s to StartCluster
	I0319 20:58:44.721144   67880 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:58:44.721243   67880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:58:44.722680   67880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:58:44.722941   67880 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.72.179 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:58:44.724832   67880 out.go:177] * Verifying Kubernetes components...
	I0319 20:58:44.723056   67880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 20:58:44.723104   67880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:58:44.723245   67880 config.go:182] Loaded profile config "custom-flannel-378078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:58:44.724949   67880 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-378078"
	I0319 20:58:44.724972   67880 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-378078"
	I0319 20:58:44.726425   67880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:58:44.725010   67880 host.go:66] Checking if "custom-flannel-378078" exists ...
	I0319 20:58:44.725019   67880 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-378078"
	I0319 20:58:44.726638   67880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-378078"
	I0319 20:58:44.726897   67880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:58:44.726928   67880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:58:44.727033   67880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:58:44.727069   67880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:58:44.748113   67880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0319 20:58:44.748350   67880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0319 20:58:44.748924   67880 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:58:44.749015   67880 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:58:44.749522   67880 main.go:141] libmachine: Using API Version  1
	I0319 20:58:44.749543   67880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:58:44.749771   67880 main.go:141] libmachine: Using API Version  1
	I0319 20:58:44.749819   67880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:58:44.749878   67880 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:58:44.750178   67880 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:58:44.750383   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetState
	I0319 20:58:44.750503   67880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:58:44.750535   67880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:58:44.754423   67880 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-378078"
	I0319 20:58:44.754469   67880 host.go:66] Checking if "custom-flannel-378078" exists ...
	I0319 20:58:44.754886   67880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:58:44.754910   67880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:58:44.771521   67880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0319 20:58:44.772088   67880 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:58:44.772618   67880 main.go:141] libmachine: Using API Version  1
	I0319 20:58:44.772640   67880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:58:44.773097   67880 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:58:44.773353   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetState
	I0319 20:58:44.774231   67880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0319 20:58:44.774662   67880 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:58:44.775102   67880 main.go:141] libmachine: Using API Version  1
	I0319 20:58:44.775114   67880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:58:44.775604   67880 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:58:44.776053   67880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:58:44.776070   67880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:58:44.776304   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .DriverName
	I0319 20:58:44.778544   67880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:58:43.035498   67533 pod_ready.go:92] pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:43.035523   67533 pod_ready.go:81] duration metric: took 18.510137438s for pod "calico-kube-controllers-5fc7d6cf67-55tkv" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.035536   67533 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-hrpfl" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.544091   67533 pod_ready.go:92] pod "calico-node-hrpfl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:43.544118   67533 pod_ready.go:81] duration metric: took 508.574196ms for pod "calico-node-hrpfl" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.544131   67533 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-bqrcb" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.553726   67533 pod_ready.go:92] pod "coredns-76f75df574-bqrcb" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:43.553752   67533 pod_ready.go:81] duration metric: took 9.611802ms for pod "coredns-76f75df574-bqrcb" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.553767   67533 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.563969   67533 pod_ready.go:92] pod "etcd-calico-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:43.563997   67533 pod_ready.go:81] duration metric: took 10.222115ms for pod "etcd-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.564010   67533 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.571384   67533 pod_ready.go:92] pod "kube-apiserver-calico-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:43.571403   67533 pod_ready.go:81] duration metric: took 7.386003ms for pod "kube-apiserver-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.571413   67533 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.831306   67533 pod_ready.go:92] pod "kube-controller-manager-calico-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:43.831327   67533 pod_ready.go:81] duration metric: took 259.90785ms for pod "kube-controller-manager-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:43.831337   67533 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9bspv" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:44.229656   67533 pod_ready.go:92] pod "kube-proxy-9bspv" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:44.229682   67533 pod_ready.go:81] duration metric: took 398.33856ms for pod "kube-proxy-9bspv" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:44.229695   67533 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:44.631614   67533 pod_ready.go:92] pod "kube-scheduler-calico-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:58:44.631645   67533 pod_ready.go:81] duration metric: took 401.940067ms for pod "kube-scheduler-calico-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:44.631660   67533 pod_ready.go:38] duration metric: took 20.118301446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:58:44.631678   67533 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:58:44.631740   67533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:58:44.650306   67533 api_server.go:72] duration metric: took 32.457421465s to wait for apiserver process to appear ...
	I0319 20:58:44.650339   67533 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:58:44.650362   67533 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I0319 20:58:44.655649   67533 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I0319 20:58:44.657346   67533 api_server.go:141] control plane version: v1.29.3
	I0319 20:58:44.657368   67533 api_server.go:131] duration metric: took 7.022023ms to wait for apiserver health ...
	I0319 20:58:44.657376   67533 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:58:44.835119   67533 system_pods.go:59] 9 kube-system pods found
	I0319 20:58:44.835154   67533 system_pods.go:61] "calico-kube-controllers-5fc7d6cf67-55tkv" [c3332a7f-6afb-4617-96fa-b42202b2387d] Running
	I0319 20:58:44.835162   67533 system_pods.go:61] "calico-node-hrpfl" [4bb20dff-00f8-4f0c-be65-9687ebac46d5] Running
	I0319 20:58:44.835167   67533 system_pods.go:61] "coredns-76f75df574-bqrcb" [d232f033-15d9-4cd2-bbc2-893ef6eb4c8c] Running
	I0319 20:58:44.835172   67533 system_pods.go:61] "etcd-calico-378078" [c15b3e7a-d761-41e5-99be-a536dbdb85ac] Running
	I0319 20:58:44.835177   67533 system_pods.go:61] "kube-apiserver-calico-378078" [294c95c9-3160-46e4-8974-8c0c770c5fa0] Running
	I0319 20:58:44.835181   67533 system_pods.go:61] "kube-controller-manager-calico-378078" [9822c505-51a7-4924-b6c9-03956a8c4d4a] Running
	I0319 20:58:44.835188   67533 system_pods.go:61] "kube-proxy-9bspv" [043e01d7-b93c-4004-9b02-d4f9c7d2d1cb] Running
	I0319 20:58:44.835193   67533 system_pods.go:61] "kube-scheduler-calico-378078" [a8762b76-0a05-42b4-b85f-213ec371fee1] Running
	I0319 20:58:44.835201   67533 system_pods.go:61] "storage-provisioner" [7b5ca648-8a55-4759-9e0d-ce6e77d4976e] Running
	I0319 20:58:44.835208   67533 system_pods.go:74] duration metric: took 177.827312ms to wait for pod list to return data ...
	I0319 20:58:44.835218   67533 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:58:45.030129   67533 default_sa.go:45] found service account: "default"
	I0319 20:58:45.030156   67533 default_sa.go:55] duration metric: took 194.928479ms for default service account to be created ...
	I0319 20:58:45.030166   67533 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:58:45.245266   67533 system_pods.go:86] 9 kube-system pods found
	I0319 20:58:45.245306   67533 system_pods.go:89] "calico-kube-controllers-5fc7d6cf67-55tkv" [c3332a7f-6afb-4617-96fa-b42202b2387d] Running
	I0319 20:58:45.245315   67533 system_pods.go:89] "calico-node-hrpfl" [4bb20dff-00f8-4f0c-be65-9687ebac46d5] Running
	I0319 20:58:45.245321   67533 system_pods.go:89] "coredns-76f75df574-bqrcb" [d232f033-15d9-4cd2-bbc2-893ef6eb4c8c] Running
	I0319 20:58:45.245328   67533 system_pods.go:89] "etcd-calico-378078" [c15b3e7a-d761-41e5-99be-a536dbdb85ac] Running
	I0319 20:58:45.245335   67533 system_pods.go:89] "kube-apiserver-calico-378078" [294c95c9-3160-46e4-8974-8c0c770c5fa0] Running
	I0319 20:58:45.245342   67533 system_pods.go:89] "kube-controller-manager-calico-378078" [9822c505-51a7-4924-b6c9-03956a8c4d4a] Running
	I0319 20:58:45.245348   67533 system_pods.go:89] "kube-proxy-9bspv" [043e01d7-b93c-4004-9b02-d4f9c7d2d1cb] Running
	I0319 20:58:45.245357   67533 system_pods.go:89] "kube-scheduler-calico-378078" [a8762b76-0a05-42b4-b85f-213ec371fee1] Running
	I0319 20:58:45.245363   67533 system_pods.go:89] "storage-provisioner" [7b5ca648-8a55-4759-9e0d-ce6e77d4976e] Running
	I0319 20:58:45.245374   67533 system_pods.go:126] duration metric: took 215.201214ms to wait for k8s-apps to be running ...
	I0319 20:58:45.245387   67533 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:58:45.245440   67533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:58:45.267856   67533 system_svc.go:56] duration metric: took 22.458778ms WaitForService to wait for kubelet
	I0319 20:58:45.267891   67533 kubeadm.go:576] duration metric: took 33.075012007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:58:45.267915   67533 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:58:45.431226   67533 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:58:45.431286   67533 node_conditions.go:123] node cpu capacity is 2
	I0319 20:58:45.431302   67533 node_conditions.go:105] duration metric: took 163.380766ms to run NodePressure ...
	I0319 20:58:45.431317   67533 start.go:240] waiting for startup goroutines ...
	I0319 20:58:45.431328   67533 start.go:245] waiting for cluster config update ...
	I0319 20:58:45.431342   67533 start.go:254] writing updated cluster config ...
	I0319 20:58:45.431720   67533 ssh_runner.go:195] Run: rm -f paused
	I0319 20:58:45.487213   67533 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:58:45.489246   67533 out.go:177] * Done! kubectl is now configured to use "calico-378078" cluster and "default" namespace by default
	I0319 20:58:44.779979   67880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:58:44.779998   67880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:58:44.780016   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHHostname
	I0319 20:58:44.783092   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | domain custom-flannel-378078 has defined MAC address 52:54:00:7d:10:66 in network mk-custom-flannel-378078
	I0319 20:58:44.783607   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:10:66", ip: ""} in network mk-custom-flannel-378078: {Iface:virbr4 ExpiryTime:2024-03-19 21:57:57 +0000 UTC Type:0 Mac:52:54:00:7d:10:66 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:custom-flannel-378078 Clientid:01:52:54:00:7d:10:66}
	I0319 20:58:44.783633   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | domain custom-flannel-378078 has defined IP address 192.168.72.179 and MAC address 52:54:00:7d:10:66 in network mk-custom-flannel-378078
	I0319 20:58:44.783876   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHPort
	I0319 20:58:44.784060   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHKeyPath
	I0319 20:58:44.784282   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHUsername
	I0319 20:58:44.784429   67880 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/custom-flannel-378078/id_rsa Username:docker}
	I0319 20:58:44.794341   67880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0319 20:58:44.794957   67880 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:58:44.795442   67880 main.go:141] libmachine: Using API Version  1
	I0319 20:58:44.795455   67880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:58:44.795921   67880 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:58:44.796111   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetState
	I0319 20:58:44.797773   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .DriverName
	I0319 20:58:44.798004   67880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:58:44.798015   67880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:58:44.798039   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHHostname
	I0319 20:58:44.800253   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | domain custom-flannel-378078 has defined MAC address 52:54:00:7d:10:66 in network mk-custom-flannel-378078
	I0319 20:58:44.800659   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:10:66", ip: ""} in network mk-custom-flannel-378078: {Iface:virbr4 ExpiryTime:2024-03-19 21:57:57 +0000 UTC Type:0 Mac:52:54:00:7d:10:66 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:custom-flannel-378078 Clientid:01:52:54:00:7d:10:66}
	I0319 20:58:44.800678   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | domain custom-flannel-378078 has defined IP address 192.168.72.179 and MAC address 52:54:00:7d:10:66 in network mk-custom-flannel-378078
	I0319 20:58:44.800786   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHPort
	I0319 20:58:44.800958   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHKeyPath
	I0319 20:58:44.801096   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .GetSSHUsername
	I0319 20:58:44.801229   67880 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/custom-flannel-378078/id_rsa Username:docker}
	I0319 20:58:45.137524   67880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:58:45.137583   67880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0319 20:58:45.144733   67880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:58:45.151121   67880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:58:45.604190   67880 start.go:948] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0319 20:58:45.605562   67880 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-378078" to be "Ready" ...
	I0319 20:58:45.851507   67880 main.go:141] libmachine: Making call to close driver server
	I0319 20:58:45.851535   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .Close
	I0319 20:58:45.851711   67880 main.go:141] libmachine: Making call to close driver server
	I0319 20:58:45.851737   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .Close
	I0319 20:58:45.851811   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | Closing plugin on server side
	I0319 20:58:45.851825   67880 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:58:45.851889   67880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:58:45.851941   67880 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:58:45.851962   67880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:58:45.851963   67880 main.go:141] libmachine: Making call to close driver server
	I0319 20:58:45.851972   67880 main.go:141] libmachine: Making call to close driver server
	I0319 20:58:45.851974   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .Close
	I0319 20:58:45.851981   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .Close
	I0319 20:58:45.851942   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | Closing plugin on server side
	I0319 20:58:45.852287   67880 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:58:45.852305   67880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:58:45.852312   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | Closing plugin on server side
	I0319 20:58:45.852338   67880 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:58:45.852351   67880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:58:45.852369   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | Closing plugin on server side
	I0319 20:58:45.865189   67880 main.go:141] libmachine: Making call to close driver server
	I0319 20:58:45.865206   67880 main.go:141] libmachine: (custom-flannel-378078) Calling .Close
	I0319 20:58:45.865444   67880 main.go:141] libmachine: (custom-flannel-378078) DBG | Closing plugin on server side
	I0319 20:58:45.865450   67880 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:58:45.865466   67880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:58:45.867175   67880 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0319 20:58:43.704466   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:43.705038   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:43.705064   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:43.704982   69965 retry.go:31] will retry after 3.168065778s: waiting for machine to come up
	I0319 20:58:46.876382   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:46.876927   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:46.876956   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:46.876864   69965 retry.go:31] will retry after 3.778631726s: waiting for machine to come up
	I0319 20:58:45.868361   67880 addons.go:505] duration metric: took 1.145284638s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0319 20:58:46.109135   67880 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-378078" context rescaled to 1 replicas
	I0319 20:58:47.609753   67880 node_ready.go:53] node "custom-flannel-378078" has status "Ready":"False"
	I0319 20:58:50.657816   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:50.658387   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find current IP address of domain enable-default-cni-378078 in network mk-enable-default-cni-378078
	I0319 20:58:50.658440   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | I0319 20:58:50.658339   69965 retry.go:31] will retry after 6.211911188s: waiting for machine to come up
	I0319 20:58:49.610444   67880 node_ready.go:53] node "custom-flannel-378078" has status "Ready":"False"
	I0319 20:58:52.110834   67880 node_ready.go:53] node "custom-flannel-378078" has status "Ready":"False"
	I0319 20:58:56.873420   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:56.873985   69942 main.go:141] libmachine: (enable-default-cni-378078) Found IP for machine: 192.168.50.168
	I0319 20:58:56.874010   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has current primary IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:56.874019   69942 main.go:141] libmachine: (enable-default-cni-378078) Reserving static IP address...
	I0319 20:58:56.874388   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-378078", mac: "52:54:00:d5:38:cf", ip: "192.168.50.168"} in network mk-enable-default-cni-378078
	I0319 20:58:56.950957   69942 main.go:141] libmachine: (enable-default-cni-378078) Reserved static IP address: 192.168.50.168
	I0319 20:58:56.950985   69942 main.go:141] libmachine: (enable-default-cni-378078) Waiting for SSH to be available...
	I0319 20:58:56.950995   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Getting to WaitForSSH function...
	I0319 20:58:56.953663   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:56.954127   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:56.954163   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:56.954369   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Using SSH client type: external
	I0319 20:58:56.954413   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa (-rw-------)
	I0319 20:58:56.954455   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:58:56.954473   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | About to run SSH command:
	I0319 20:58:56.954488   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | exit 0
	I0319 20:58:57.081813   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | SSH cmd err, output: <nil>: 
	I0319 20:58:57.082250   69942 main.go:141] libmachine: (enable-default-cni-378078) KVM machine creation complete!
	I0319 20:58:57.082555   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetConfigRaw
	I0319 20:58:57.083176   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:57.083413   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:57.083594   69942 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0319 20:58:57.083613   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetState
	I0319 20:58:57.085028   69942 main.go:141] libmachine: Detecting operating system of created instance...
	I0319 20:58:57.085045   69942 main.go:141] libmachine: Waiting for SSH to be available...
	I0319 20:58:57.085054   69942 main.go:141] libmachine: Getting to WaitForSSH function...
	I0319 20:58:57.085063   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:57.087881   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.088376   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.088408   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.088589   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:57.088792   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.088943   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.089090   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:57.089267   69942 main.go:141] libmachine: Using SSH client type: native
	I0319 20:58:57.089476   69942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0319 20:58:57.089491   69942 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0319 20:58:57.196300   69942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:58:57.196337   69942 main.go:141] libmachine: Detecting the provisioner...
	I0319 20:58:57.196349   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:57.199270   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.199649   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.199680   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.199933   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:57.200149   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.200351   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.200507   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:57.200674   69942 main.go:141] libmachine: Using SSH client type: native
	I0319 20:58:57.200847   69942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0319 20:58:57.200858   69942 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0319 20:58:57.309893   69942 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0319 20:58:57.309984   69942 main.go:141] libmachine: found compatible host: buildroot
	I0319 20:58:57.309998   69942 main.go:141] libmachine: Provisioning with buildroot...
	I0319 20:58:57.310008   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetMachineName
	I0319 20:58:57.310275   69942 buildroot.go:166] provisioning hostname "enable-default-cni-378078"
	I0319 20:58:57.310304   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetMachineName
	I0319 20:58:57.310514   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:57.313282   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.313736   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.313766   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.313872   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:57.314057   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.314234   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.314370   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:57.314576   69942 main.go:141] libmachine: Using SSH client type: native
	I0319 20:58:57.314801   69942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0319 20:58:57.314827   69942 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-378078 && echo "enable-default-cni-378078" | sudo tee /etc/hostname
	I0319 20:58:57.437390   69942 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-378078
	
	I0319 20:58:57.437418   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:54.623343   67880 node_ready.go:49] node "custom-flannel-378078" has status "Ready":"True"
	I0319 20:58:54.623383   67880 node_ready.go:38] duration metric: took 9.017780451s for node "custom-flannel-378078" to be "Ready" ...
	I0319 20:58:54.623400   67880 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:58:54.638697   67880 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-trnlk" in "kube-system" namespace to be "Ready" ...
	I0319 20:58:56.647854   67880 pod_ready.go:102] pod "coredns-76f75df574-trnlk" in "kube-system" namespace has status "Ready":"False"
	I0319 20:58:57.440455   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.441049   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:57.441511   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.441562   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.441688   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.441853   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.441989   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:57.442121   69942 main.go:141] libmachine: Using SSH client type: native
	I0319 20:58:57.442273   69942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0319 20:58:57.442291   69942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-378078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-378078/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-378078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:58:57.560298   69942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:58:57.560329   69942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:58:57.560371   69942 buildroot.go:174] setting up certificates
	I0319 20:58:57.560386   69942 provision.go:84] configureAuth start
	I0319 20:58:57.560401   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetMachineName
	I0319 20:58:57.560666   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetIP
	I0319 20:58:57.563695   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.564135   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.564165   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.564392   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:57.566364   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.566818   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.566852   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.566979   69942 provision.go:143] copyHostCerts
	I0319 20:58:57.567042   69942 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:58:57.567057   69942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:58:57.567134   69942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:58:57.567255   69942 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:58:57.567267   69942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:58:57.567303   69942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:58:57.567390   69942 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:58:57.567401   69942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:58:57.567433   69942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:58:57.567521   69942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-378078 san=[127.0.0.1 192.168.50.168 enable-default-cni-378078 localhost minikube]
	I0319 20:58:57.688673   69942 provision.go:177] copyRemoteCerts
	I0319 20:58:57.688721   69942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:58:57.688745   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:57.691396   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.691760   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.691790   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.691908   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:57.692084   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.692232   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:57.692385   69942 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa Username:docker}
	I0319 20:58:57.775726   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:58:57.805165   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0319 20:58:57.832876   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:58:57.861259   69942 provision.go:87] duration metric: took 300.858337ms to configureAuth
	I0319 20:58:57.861283   69942 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:58:57.861519   69942 config.go:182] Loaded profile config "enable-default-cni-378078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:58:57.861613   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:57.864563   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.865082   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:57.865123   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:57.865269   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:57.865479   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.865659   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:57.865817   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:57.866043   69942 main.go:141] libmachine: Using SSH client type: native
	I0319 20:58:57.866226   69942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0319 20:58:57.866243   69942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:58:58.154670   69942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:58:58.154697   69942 main.go:141] libmachine: Checking connection to Docker...
	I0319 20:58:58.154707   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetURL
	I0319 20:58:58.156101   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | Using libvirt version 6000000
	I0319 20:58:58.158815   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.159180   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.159225   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.159471   69942 main.go:141] libmachine: Docker is up and running!
	I0319 20:58:58.159482   69942 main.go:141] libmachine: Reticulating splines...
	I0319 20:58:58.159489   69942 client.go:171] duration metric: took 30.612469492s to LocalClient.Create
	I0319 20:58:58.159507   69942 start.go:167] duration metric: took 30.612528612s to libmachine.API.Create "enable-default-cni-378078"
	I0319 20:58:58.159517   69942 start.go:293] postStartSetup for "enable-default-cni-378078" (driver="kvm2")
	I0319 20:58:58.159526   69942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:58:58.159541   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:58.159774   69942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:58:58.159804   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:58.162033   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.162350   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.162378   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.162550   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:58.162736   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:58.162899   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:58.163061   69942 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa Username:docker}
	I0319 20:58:58.247747   69942 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:58:58.253414   69942 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:58:58.253441   69942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:58:58.253508   69942 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:58:58.253601   69942 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:58:58.253715   69942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:58:58.265210   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:58:58.296668   69942 start.go:296] duration metric: took 137.138323ms for postStartSetup
	I0319 20:58:58.296744   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetConfigRaw
	I0319 20:58:58.297739   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetIP
	I0319 20:58:58.300588   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.301070   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.301105   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.301301   69942 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/config.json ...
	I0319 20:58:58.301535   69942 start.go:128] duration metric: took 30.773957939s to createHost
	I0319 20:58:58.301566   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:58.303799   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.304097   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.304122   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.304253   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:58.304434   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:58.304628   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:58.304792   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:58.304956   69942 main.go:141] libmachine: Using SSH client type: native
	I0319 20:58:58.305117   69942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0319 20:58:58.305127   69942 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:58:58.414118   69942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710881938.357919331
	
	I0319 20:58:58.414141   69942 fix.go:216] guest clock: 1710881938.357919331
	I0319 20:58:58.414150   69942 fix.go:229] Guest: 2024-03-19 20:58:58.357919331 +0000 UTC Remote: 2024-03-19 20:58:58.301551163 +0000 UTC m=+30.916490786 (delta=56.368168ms)
	I0319 20:58:58.414176   69942 fix.go:200] guest clock delta is within tolerance: 56.368168ms
	I0319 20:58:58.414182   69942 start.go:83] releasing machines lock for "enable-default-cni-378078", held for 30.886677582s
	I0319 20:58:58.414207   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:58.414490   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetIP
	I0319 20:58:58.417192   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.417514   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.417539   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.417648   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:58.418231   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:58.418431   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .DriverName
	I0319 20:58:58.418526   69942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:58:58.418579   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:58.418643   69942 ssh_runner.go:195] Run: cat /version.json
	I0319 20:58:58.418664   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHHostname
	I0319 20:58:58.421589   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.421886   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.421910   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.421929   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.422053   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:58.422238   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:58.422322   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:58:58.422355   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:58:58.422422   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:58.422533   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHPort
	I0319 20:58:58.422618   69942 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa Username:docker}
	I0319 20:58:58.422990   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHKeyPath
	I0319 20:58:58.423150   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetSSHUsername
	I0319 20:58:58.423315   69942 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/enable-default-cni-378078/id_rsa Username:docker}
	I0319 20:58:58.535704   69942 ssh_runner.go:195] Run: systemctl --version
	I0319 20:58:58.542175   69942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:58:58.710427   69942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:58:58.717685   69942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:58:58.717751   69942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:58:58.735981   69942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:58:58.736000   69942 start.go:494] detecting cgroup driver to use...
	I0319 20:58:58.736061   69942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:58:58.754642   69942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:58:58.770461   69942 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:58:58.770514   69942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:58:58.786617   69942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:58:58.802829   69942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:58:58.928884   69942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:58:59.103938   69942 docker.go:233] disabling docker service ...
	I0319 20:58:59.104026   69942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:58:59.121190   69942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:58:59.135687   69942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:58:59.268604   69942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:58:59.393867   69942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:58:59.412223   69942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:58:59.437266   69942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:58:59.437335   69942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.450008   69942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:58:59.450076   69942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.465609   69942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.478501   69942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.490995   69942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:58:59.504269   69942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.516982   69942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.538260   69942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:58:59.550614   69942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:58:59.561655   69942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:58:59.561710   69942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:58:59.577067   69942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:58:59.589159   69942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:58:59.729801   69942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:58:59.888858   69942 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:58:59.888980   69942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:58:59.898386   69942 start.go:562] Will wait 60s for crictl version
	I0319 20:58:59.898498   69942 ssh_runner.go:195] Run: which crictl
	I0319 20:58:59.905565   69942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:58:59.949300   69942 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:58:59.949381   69942 ssh_runner.go:195] Run: crio --version
	I0319 20:58:59.986863   69942 ssh_runner.go:195] Run: crio --version
	I0319 20:59:00.025836   69942 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:59:00.027451   69942 main.go:141] libmachine: (enable-default-cni-378078) Calling .GetIP
	I0319 20:59:00.030665   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:59:00.031090   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:38:cf", ip: ""} in network mk-enable-default-cni-378078: {Iface:virbr2 ExpiryTime:2024-03-19 21:58:45 +0000 UTC Type:0 Mac:52:54:00:d5:38:cf Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:enable-default-cni-378078 Clientid:01:52:54:00:d5:38:cf}
	I0319 20:59:00.031158   69942 main.go:141] libmachine: (enable-default-cni-378078) DBG | domain enable-default-cni-378078 has defined IP address 192.168.50.168 and MAC address 52:54:00:d5:38:cf in network mk-enable-default-cni-378078
	I0319 20:59:00.031380   69942 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:59:00.036410   69942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:59:00.051466   69942 kubeadm.go:877] updating cluster {Name:enable-default-cni-378078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:enable-default-cni-378078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:59:00.051590   69942 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:59:00.051643   69942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:59:00.092745   69942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:59:00.092845   69942 ssh_runner.go:195] Run: which lz4
	I0319 20:59:00.097740   69942 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:59:00.103037   69942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:59:00.103066   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:59:01.894027   69942 crio.go:462] duration metric: took 1.796319915s to copy over tarball
	I0319 20:59:01.894123   69942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:58:59.147372   67880 pod_ready.go:102] pod "coredns-76f75df574-trnlk" in "kube-system" namespace has status "Ready":"False"
	I0319 20:59:01.647039   67880 pod_ready.go:102] pod "coredns-76f75df574-trnlk" in "kube-system" namespace has status "Ready":"False"
	I0319 20:59:03.648022   67880 pod_ready.go:102] pod "coredns-76f75df574-trnlk" in "kube-system" namespace has status "Ready":"False"
	I0319 20:59:04.861608   69942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.967453636s)
	I0319 20:59:04.861646   69942 crio.go:469] duration metric: took 2.967590853s to extract the tarball
	I0319 20:59:04.861652   69942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:59:04.903329   69942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:59:04.957566   69942 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:59:04.957594   69942 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:59:04.957604   69942 kubeadm.go:928] updating node { 192.168.50.168 8443 v1.29.3 crio true true} ...
	I0319 20:59:04.957725   69942 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-378078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-378078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0319 20:59:04.957787   69942 ssh_runner.go:195] Run: crio config
	I0319 20:59:05.010523   69942 cni.go:84] Creating CNI manager for "bridge"
	I0319 20:59:05.010549   69942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:59:05.010573   69942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.168 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-378078 NodeName:enable-default-cni-378078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:59:05.010716   69942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-378078"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:59:05.010783   69942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:59:05.022152   69942 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:59:05.022214   69942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:59:05.034300   69942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0319 20:59:05.053722   69942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:59:05.072707   69942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0319 20:59:05.092726   69942 ssh_runner.go:195] Run: grep 192.168.50.168	control-plane.minikube.internal$ /etc/hosts
	I0319 20:59:05.098047   69942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:59:05.111627   69942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:59:05.234511   69942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:59:05.256821   69942 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078 for IP: 192.168.50.168
	I0319 20:59:05.256855   69942 certs.go:194] generating shared ca certs ...
	I0319 20:59:05.256884   69942 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.257053   69942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:59:05.257113   69942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:59:05.257127   69942 certs.go:256] generating profile certs ...
	I0319 20:59:05.257200   69942 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/client.key
	I0319 20:59:05.257213   69942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/client.crt with IP's: []
	I0319 20:59:05.373398   69942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/client.crt ...
	I0319 20:59:05.373427   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/client.crt: {Name:mk4ea8da3ced31d25d30eb00893ea6378834bfab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.373634   69942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/client.key ...
	I0319 20:59:05.373651   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/client.key: {Name:mk49322ac9a258b6c483fa31a19810c793b2e78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.373749   69942 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.key.a233e21a
	I0319 20:59:05.373765   69942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.crt.a233e21a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.168]
	I0319 20:59:05.635795   69942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.crt.a233e21a ...
	I0319 20:59:05.635822   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.crt.a233e21a: {Name:mk8a079af30e197ef2f1e7625fe26a183dfa11bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.635966   69942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.key.a233e21a ...
	I0319 20:59:05.635979   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.key.a233e21a: {Name:mk7def7b906947db3880e9044667470d90e55391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.636049   69942 certs.go:381] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.crt.a233e21a -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.crt
	I0319 20:59:05.636147   69942 certs.go:385] copying /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.key.a233e21a -> /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.key
	I0319 20:59:05.636254   69942 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.key
	I0319 20:59:05.636290   69942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.crt with IP's: []
	I0319 20:59:05.686465   69942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.crt ...
	I0319 20:59:05.686491   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.crt: {Name:mkf1a5ef7d0188580fccdd6702767481baf8cc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.686634   69942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.key ...
	I0319 20:59:05.686648   69942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.key: {Name:mk1d492f83adece4798b6cb6ce0a13f9b82c5102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:59:05.686803   69942 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:59:05.686836   69942 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:59:05.686845   69942 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:59:05.686868   69942 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:59:05.686892   69942 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:59:05.686914   69942 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:59:05.686955   69942 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:59:05.687558   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:59:05.718316   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:59:05.751717   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:59:05.779216   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:59:05.809959   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0319 20:59:05.840656   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:59:05.871939   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:59:05.903216   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/enable-default-cni-378078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 20:59:05.931150   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:59:05.962286   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:59:05.989838   69942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:59:06.016030   69942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:59:06.036403   69942 ssh_runner.go:195] Run: openssl version
	I0319 20:59:06.044397   69942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:59:06.058098   69942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:59:06.063371   69942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:59:06.063443   69942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:59:06.070301   69942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:59:06.083023   69942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:59:06.096510   69942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:59:06.101985   69942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:59:06.102038   69942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:59:06.108906   69942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:59:06.123037   69942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:59:06.137205   69942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:59:06.143796   69942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:59:06.143872   69942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:59:06.151218   69942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:59:06.163969   69942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:59:06.169079   69942 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 20:59:06.169135   69942 kubeadm.go:391] StartCluster: {Name:enable-default-cni-378078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.3 ClusterName:enable-default-cni-378078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:59:06.169223   69942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:59:06.169279   69942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:59:06.216621   69942 cri.go:89] found id: ""
	I0319 20:59:06.216698   69942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 20:59:06.228133   69942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:59:06.239725   69942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:59:06.251673   69942 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:59:06.251689   69942 kubeadm.go:156] found existing configuration files:
	
	I0319 20:59:06.251752   69942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:59:06.261840   69942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:59:06.261907   69942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:59:06.272173   69942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:59:06.282527   69942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:59:06.282571   69942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:59:06.293432   69942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:59:06.303824   69942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:59:06.303885   69942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:59:06.314002   69942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:59:06.323814   69942 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:59:06.323867   69942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:59:06.334021   69942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:59:06.535937   69942 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:59:06.147474   67880 pod_ready.go:102] pod "coredns-76f75df574-trnlk" in "kube-system" namespace has status "Ready":"False"
	I0319 20:59:08.646544   67880 pod_ready.go:92] pod "coredns-76f75df574-trnlk" in "kube-system" namespace has status "Ready":"True"
	I0319 20:59:08.646571   67880 pod_ready.go:81] duration metric: took 14.007839521s for pod "coredns-76f75df574-trnlk" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.646584   67880 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.651450   67880 pod_ready.go:92] pod "etcd-custom-flannel-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:59:08.651470   67880 pod_ready.go:81] duration metric: took 4.878827ms for pod "etcd-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.651479   67880 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.656609   67880 pod_ready.go:92] pod "kube-apiserver-custom-flannel-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:59:08.656630   67880 pod_ready.go:81] duration metric: took 5.144072ms for pod "kube-apiserver-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.656641   67880 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.663532   67880 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:59:08.663550   67880 pod_ready.go:81] duration metric: took 6.900803ms for pod "kube-controller-manager-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.663561   67880 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-qgph6" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.670738   67880 pod_ready.go:92] pod "kube-proxy-qgph6" in "kube-system" namespace has status "Ready":"True"
	I0319 20:59:08.670759   67880 pod_ready.go:81] duration metric: took 7.190815ms for pod "kube-proxy-qgph6" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:08.670770   67880 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:09.044024   67880 pod_ready.go:92] pod "kube-scheduler-custom-flannel-378078" in "kube-system" namespace has status "Ready":"True"
	I0319 20:59:09.044054   67880 pod_ready.go:81] duration metric: took 373.275436ms for pod "kube-scheduler-custom-flannel-378078" in "kube-system" namespace to be "Ready" ...
	I0319 20:59:09.044068   67880 pod_ready.go:38] duration metric: took 14.420653488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:59:09.044084   67880 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:59:09.044144   67880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:59:09.064152   67880 api_server.go:72] duration metric: took 24.341174996s to wait for apiserver process to appear ...
	I0319 20:59:09.064178   67880 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:59:09.064198   67880 api_server.go:253] Checking apiserver healthz at https://192.168.72.179:8443/healthz ...
	I0319 20:59:09.070352   67880 api_server.go:279] https://192.168.72.179:8443/healthz returned 200:
	ok
	I0319 20:59:09.071360   67880 api_server.go:141] control plane version: v1.29.3
	I0319 20:59:09.071387   67880 api_server.go:131] duration metric: took 7.203234ms to wait for apiserver health ...
	I0319 20:59:09.071395   67880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:59:09.247715   67880 system_pods.go:59] 7 kube-system pods found
	I0319 20:59:09.247742   67880 system_pods.go:61] "coredns-76f75df574-trnlk" [eabd92fd-621f-48d4-bb0f-d1a66b089f81] Running
	I0319 20:59:09.247746   67880 system_pods.go:61] "etcd-custom-flannel-378078" [67db873f-3eca-436f-8521-eaf14fe6ca4b] Running
	I0319 20:59:09.247750   67880 system_pods.go:61] "kube-apiserver-custom-flannel-378078" [a969be19-6b0e-4a50-8e4a-916c0f1a6255] Running
	I0319 20:59:09.247754   67880 system_pods.go:61] "kube-controller-manager-custom-flannel-378078" [7150c487-33e4-49b0-9fb7-b58d420527e8] Running
	I0319 20:59:09.247757   67880 system_pods.go:61] "kube-proxy-qgph6" [76c69e88-b170-4ef5-bce7-5c6db4dde20c] Running
	I0319 20:59:09.247760   67880 system_pods.go:61] "kube-scheduler-custom-flannel-378078" [f4209409-f9b0-49b2-97ce-d719d0f18ec4] Running
	I0319 20:59:09.247763   67880 system_pods.go:61] "storage-provisioner" [7d603681-6979-49e4-bc19-5ef1914ebfb8] Running
	I0319 20:59:09.247769   67880 system_pods.go:74] duration metric: took 176.368761ms to wait for pod list to return data ...
	I0319 20:59:09.247776   67880 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:59:09.445096   67880 default_sa.go:45] found service account: "default"
	I0319 20:59:09.445119   67880 default_sa.go:55] duration metric: took 197.337001ms for default service account to be created ...
	I0319 20:59:09.445128   67880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:59:09.647103   67880 system_pods.go:86] 7 kube-system pods found
	I0319 20:59:09.647137   67880 system_pods.go:89] "coredns-76f75df574-trnlk" [eabd92fd-621f-48d4-bb0f-d1a66b089f81] Running
	I0319 20:59:09.647147   67880 system_pods.go:89] "etcd-custom-flannel-378078" [67db873f-3eca-436f-8521-eaf14fe6ca4b] Running
	I0319 20:59:09.647154   67880 system_pods.go:89] "kube-apiserver-custom-flannel-378078" [a969be19-6b0e-4a50-8e4a-916c0f1a6255] Running
	I0319 20:59:09.647161   67880 system_pods.go:89] "kube-controller-manager-custom-flannel-378078" [7150c487-33e4-49b0-9fb7-b58d420527e8] Running
	I0319 20:59:09.647166   67880 system_pods.go:89] "kube-proxy-qgph6" [76c69e88-b170-4ef5-bce7-5c6db4dde20c] Running
	I0319 20:59:09.647173   67880 system_pods.go:89] "kube-scheduler-custom-flannel-378078" [f4209409-f9b0-49b2-97ce-d719d0f18ec4] Running
	I0319 20:59:09.647179   67880 system_pods.go:89] "storage-provisioner" [7d603681-6979-49e4-bc19-5ef1914ebfb8] Running
	I0319 20:59:09.647189   67880 system_pods.go:126] duration metric: took 202.05479ms to wait for k8s-apps to be running ...
	I0319 20:59:09.647201   67880 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:59:09.647257   67880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:59:09.668315   67880 system_svc.go:56] duration metric: took 21.104133ms WaitForService to wait for kubelet
	I0319 20:59:09.668347   67880 kubeadm.go:576] duration metric: took 24.945374115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:59:09.668373   67880 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:59:09.844744   67880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:59:09.844770   67880 node_conditions.go:123] node cpu capacity is 2
	I0319 20:59:09.844782   67880 node_conditions.go:105] duration metric: took 176.404002ms to run NodePressure ...
	I0319 20:59:09.844792   67880 start.go:240] waiting for startup goroutines ...
	I0319 20:59:09.844799   67880 start.go:245] waiting for cluster config update ...
	I0319 20:59:09.844808   67880 start.go:254] writing updated cluster config ...
	I0319 20:59:09.845084   67880 ssh_runner.go:195] Run: rm -f paused
	I0319 20:59:09.912165   67880 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:59:09.914121   67880 out.go:177] * Done! kubectl is now configured to use "custom-flannel-378078" cluster and "default" namespace by default
	I0319 20:59:18.545586   69942 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:59:18.545678   69942 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:59:18.545776   69942 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:59:18.545899   69942 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:59:18.546022   69942 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:59:18.546106   69942 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:59:18.547872   69942 out.go:204]   - Generating certificates and keys ...
	I0319 20:59:18.547976   69942 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:59:18.548068   69942 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:59:18.548158   69942 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 20:59:18.548236   69942 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0319 20:59:18.548331   69942 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0319 20:59:18.548411   69942 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0319 20:59:18.548486   69942 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0319 20:59:18.548648   69942 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-378078 localhost] and IPs [192.168.50.168 127.0.0.1 ::1]
	I0319 20:59:18.548723   69942 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0319 20:59:18.548832   69942 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-378078 localhost] and IPs [192.168.50.168 127.0.0.1 ::1]
	I0319 20:59:18.548883   69942 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 20:59:18.548933   69942 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 20:59:18.548968   69942 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0319 20:59:18.549011   69942 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:59:18.549051   69942 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:59:18.549095   69942 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:59:18.549136   69942 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:59:18.549185   69942 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:59:18.549230   69942 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:59:18.549327   69942 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:59:18.549411   69942 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:59:18.550865   69942 out.go:204]   - Booting up control plane ...
	I0319 20:59:18.551004   69942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:59:18.551099   69942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:59:18.551183   69942 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:59:18.551327   69942 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:59:18.551458   69942 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:59:18.551516   69942 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:59:18.551756   69942 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:59:18.551878   69942 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.002489 seconds
	I0319 20:59:18.552046   69942 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:59:18.552234   69942 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:59:18.552328   69942 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:59:18.552579   69942 kubeadm.go:309] [mark-control-plane] Marking the node enable-default-cni-378078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:59:18.552682   69942 kubeadm.go:309] [bootstrap-token] Using token: y1t8hp.4nipts4gggm07a2x
	I0319 20:59:18.554077   69942 out.go:204]   - Configuring RBAC rules ...
	I0319 20:59:18.554221   69942 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:59:18.554340   69942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:59:18.554538   69942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:59:18.554733   69942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:59:18.554882   69942 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:59:18.555003   69942 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:59:18.555098   69942 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:59:18.555140   69942 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:59:18.555189   69942 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:59:18.555193   69942 kubeadm.go:309] 
	I0319 20:59:18.555255   69942 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:59:18.555263   69942 kubeadm.go:309] 
	I0319 20:59:18.555349   69942 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:59:18.555360   69942 kubeadm.go:309] 
	I0319 20:59:18.555397   69942 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:59:18.555467   69942 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:59:18.555555   69942 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:59:18.555566   69942 kubeadm.go:309] 
	I0319 20:59:18.555637   69942 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:59:18.555647   69942 kubeadm.go:309] 
	I0319 20:59:18.555730   69942 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:59:18.555746   69942 kubeadm.go:309] 
	I0319 20:59:18.555820   69942 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:59:18.555931   69942 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:59:18.556019   69942 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:59:18.556027   69942 kubeadm.go:309] 
	I0319 20:59:18.556133   69942 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:59:18.556277   69942 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:59:18.556297   69942 kubeadm.go:309] 
	I0319 20:59:18.556440   69942 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token y1t8hp.4nipts4gggm07a2x \
	I0319 20:59:18.556583   69942 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:59:18.556613   69942 kubeadm.go:309] 	--control-plane 
	I0319 20:59:18.556631   69942 kubeadm.go:309] 
	I0319 20:59:18.556749   69942 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:59:18.556762   69942 kubeadm.go:309] 
	I0319 20:59:18.556889   69942 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token y1t8hp.4nipts4gggm07a2x \
	I0319 20:59:18.557029   69942 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:59:18.557040   69942 cni.go:84] Creating CNI manager for "bridge"
	I0319 20:59:18.558717   69942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.495628593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce57a0fb-6bbe-45e7-8d7a-80b57f80e4a4 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.497142659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80f03fb5-53f9-4033-8b6b-8818d5201114 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.497826915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881962497793082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80f03fb5-53f9-4033-8b6b-8818d5201114 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.498883290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f430f37d-ceda-4a1d-afeb-2bd2c59dc94b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.498990931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f430f37d-ceda-4a1d-afeb-2bd2c59dc94b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.499276941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f430f37d-ceda-4a1d-afeb-2bd2c59dc94b name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.573257771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75da514d-af69-4e96-a94f-2c3dbc2e1879 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.573376028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75da514d-af69-4e96-a94f-2c3dbc2e1879 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.575021783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36c16a03-e3e2-4280-949d-9ad8578a5dbb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.575554610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881962575530349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36c16a03-e3e2-4280-949d-9ad8578a5dbb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.576085995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfa1f591-aab7-4a05-b382-40fd89ad3531 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.576138774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfa1f591-aab7-4a05-b382-40fd89ad3531 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.576343300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfa1f591-aab7-4a05-b382-40fd89ad3531 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.623065808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45ced156-c9a6-4d9f-8971-b46a985fb9c9 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.623181045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45ced156-c9a6-4d9f-8971-b46a985fb9c9 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.624270051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6f107f1-52da-461c-aeba-ddfe6099db13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.624914275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881962624886160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6f107f1-52da-461c-aeba-ddfe6099db13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.625983260Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5a2a65ba-247c-404b-8ae9-625e4f4453e0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.626615103Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f3a8a7e59db94972442ee40fbf8fc5435a3f8120258b062b9eca120563969c86,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-nv288,Uid:17b4b56d-bbde-4dbf-8441-bbaee4f8ded5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880881037898903,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-nv288,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b4b56d-bbde-4dbf-8441-bbaee4f8ded5,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:20.725253057Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b314e502-0cf6-497c-9129-8eae
14086712,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880880889324469,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-19T20:41:20.576893435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-swxdt,Uid:3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880879180581666,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:18.857228596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-4rq6h,Uid:97f3ed0d
-0300-4f53-bead-79ccbd6d17c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880879127093824,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:18.818679911Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&PodSandboxMetadata{Name:kube-proxy-j7ghm,Uid:95092d52-b83c-4c36-81b2-cd3875cf0724,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880878965217998,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-19T20:41:18.622717078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-385240,Uid:30c11a31d00f7353e1143eba8278408c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880859778323272,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30c11a31d00f7353e1143eba8278408c,kubernetes.io/config.seen: 2024-03-19T20:40:59.279262656Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9
b5ef1c1a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-385240,Uid:74ca5cfa72d52792cf077b856e0650e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880859770840987,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 74ca5cfa72d52792cf077b856e0650e0,kubernetes.io/config.seen: 2024-03-19T20:40:59.279263930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-385240,Uid:a2c2b59d1dfde18af1618e81f9f14597,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710880859763036719,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.77:8444,kubernetes.io/config.hash: a2c2b59d1dfde18af1618e81f9f14597,kubernetes.io/config.seen: 2024-03-19T20:40:59.279261426Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-385240,Uid:c1806b2a7bb310c1910f3d5423cf2aa0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710880859752308168,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,tier: control-plane,},Annotati
ons:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.77:2379,kubernetes.io/config.hash: c1806b2a7bb310c1910f3d5423cf2aa0,kubernetes.io/config.seen: 2024-03-19T20:40:59.279257524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-385240,Uid:a2c2b59d1dfde18af1618e81f9f14597,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710880566057147687,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.77:8444,kubernetes.io/config.hash: a2c2b59d1dfde18af1618e81f9f14597,kubernetes.io/config.seen
: 2024-03-19T20:36:05.559171071Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5a2a65ba-247c-404b-8ae9-625e4f4453e0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.627777243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a63ab994-0dfc-447d-bef7-590b3e0e0f0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.627860465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a63ab994-0dfc-447d-bef7-590b3e0e0f0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.628123951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a63ab994-0dfc-447d-bef7-590b3e0e0f0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.628933943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70cecd4d-ffaa-49d9-a6b2-bdf25e1f4541 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.629002377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70cecd4d-ffaa-49d9-a6b2-bdf25e1f4541 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:59:22 default-k8s-diff-port-385240 crio[693]: time="2024-03-19 20:59:22.629265264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b,PodSandboxId:6a1d349cb1723140fcc4d88efe128ba297e25045493db2862e79512266c785bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880880991367389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b314e502-0cf6-497c-9129-8eae14086712,},Annotations:map[string]string{io.kubernetes.container.hash: 730d438,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c,PodSandboxId:94c90ce3b554f92318be94c75f41d8bd9da33cf3ace0dbb4ecb876bd9bdbc496,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879728636860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-swxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4,},Annotations:map[string]string{io.kubernetes.container.hash: 9db6c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c,PodSandboxId:468ab7d556f732bb182a71cac2d6ad1cd5301cd9bd3f2716831528ba87b483f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880879613519228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4rq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 97f3ed0d-0300-4f53-bead-79ccbd6d17c0,},Annotations:map[string]string{io.kubernetes.container.hash: 3972ee8c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827,PodSandboxId:a3653b80a5bd4d91555ab16328d885566ff9893cbaf0f47d4f3029a02dddb1be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,
CreatedAt:1710880879237948806,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7ghm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95092d52-b83c-4c36-81b2-cd3875cf0724,},Annotations:map[string]string{io.kubernetes.container.hash: c9aad589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c,PodSandboxId:bf5b86b99d65a3419fa9534ba76e5ab9f9c77fb648d38bbd01ca32a9b5ef1c1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1710880860069520167
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ca5cfa72d52792cf077b856e0650e0,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee,PodSandboxId:30a50029292e91d097baeab12ffa0681e8f5c5f6b906dd4749ca5b36966e745c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1710880860008854430,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf,PodSandboxId:dcaf72d4992d52f480aa64d53c44c9279c0457a085f06e2bfaa0763d79a7565a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17108808599869
54998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30c11a31d00f7353e1143eba8278408c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82,PodSandboxId:b7c7cba2a4b8e6d4873aca9d9700eb81f16df5c9fd699f9c44000a05f87b356d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17108808
59924163550,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1806b2a7bb310c1910f3d5423cf2aa0,},Annotations:map[string]string{io.kubernetes.container.hash: a8e824b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946,PodSandboxId:6ab2a9e728b419c7b199e839dcc6ae41114736720956413936ba90b678f3f589,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1710880566420231256,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-385240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2c2b59d1dfde18af1618e81f9f14597,},Annotations:map[string]string{io.kubernetes.container.hash: b4834990,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70cecd4d-ffaa-49d9-a6b2-bdf25e1f4541 name=/runtime.v1.RuntimeService/ListContainers
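	The CRI-O entries above are the RuntimeService/ImageService RPCs issued while this report was collected (Version, ImageFsInfo, ListContainers, ListPodSandbox). A hedged sketch of reproducing the same queries by hand with crictl, using the CRI socket advertised in the node annotations further below; these are standard crictl subcommands, nothing specific to this run:
	  # Sketch: issue the same CRI queries the debug log shows, against the CRI-O socket.
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods         # RuntimeService/ListPodSandbox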
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e5edce9fd30e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   6a1d349cb1723       storage-provisioner
	8e9bbe7a0b88a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   94c90ce3b554f       coredns-76f75df574-swxdt
	373088355ffbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   468ab7d556f73       coredns-76f75df574-4rq6h
	65a6211bab4fa       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   18 minutes ago      Running             kube-proxy                0                   a3653b80a5bd4       kube-proxy-j7ghm
	0ec51f453399c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   18 minutes ago      Running             kube-scheduler            2                   bf5b86b99d65a       kube-scheduler-default-k8s-diff-port-385240
	213fcde428339       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   18 minutes ago      Running             kube-apiserver            2                   30a50029292e9       kube-apiserver-default-k8s-diff-port-385240
	21a093811a77e       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   18 minutes ago      Running             kube-controller-manager   2                   dcaf72d4992d5       kube-controller-manager-default-k8s-diff-port-385240
	ba041437b7854       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Running             etcd                      2                   b7c7cba2a4b8e       etcd-default-k8s-diff-port-385240
	28d1f1e818e44       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   23 minutes ago      Exited              kube-apiserver            1                   6ab2a9e728b41       kube-apiserver-default-k8s-diff-port-385240
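	The table above shows the current control-plane containers on attempt 2 and an earlier kube-apiserver attempt that exited roughly 23 minutes ago. A hedged way to cross-check the restart counts and the reason for that earlier exit from the Kubernetes side (plain kubectl, using only pod names already listed above):
	  # Sketch: confirm restart counts and the last termination state of the control-plane pods.
	  kubectl -n kube-system get pods -o wide
	  kubectl -n kube-system describe pod kube-apiserver-default-k8s-diff-port-385240   # Last State / Exit Code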
	
	
	==> coredns [373088355ffbb9fbba19964cdce8bb7424a30b56b29feec01e17618001cb710c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8e9bbe7a0b88a6195fa430f5a66c68d7c344e141bcd0c294756cd3a80dcfbd9c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
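	One CoreDNS replica above logged that its ready plugin was still waiting on the kubernetes plugin at startup, while both replicas report the same configuration hash. A hedged sketch for checking DNS health after the fact (the k8s-app=kube-dns label matches the pod labels in the sandbox list above; the busybox image and probe pod name are illustrative assumptions):
	  # Sketch: check CoreDNS readiness and logs, then resolve an in-cluster name.
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default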
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-385240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-385240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=default-k8s-diff-port-385240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:41:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-385240
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:59:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:56:45 +0000   Tue, 19 Mar 2024 20:41:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:56:45 +0000   Tue, 19 Mar 2024 20:41:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:56:45 +0000   Tue, 19 Mar 2024 20:41:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:56:45 +0000   Tue, 19 Mar 2024 20:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    default-k8s-diff-port-385240
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75be40579a0849b998edf347aba225d2
	  System UUID:                75be4057-9a08-49b9-98ed-f347aba225d2
	  Boot ID:                    9233f891-93d8-4a92-9088-940e41dc6547
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4rq6h                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-76f75df574-swxdt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-385240                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-385240             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-385240    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-j7ghm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-385240             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-nv288                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node default-k8s-diff-port-385240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node default-k8s-diff-port-385240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node default-k8s-diff-port-385240 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node default-k8s-diff-port-385240 event: Registered Node default-k8s-diff-port-385240 in Controller
	
	
	==> dmesg <==
	[  +0.046621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.910546] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.516450] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681047] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.494753] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.057631] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075790] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.220764] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.157207] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.352029] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[Mar19 20:36] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.067842] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.327593] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +4.631759] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.500003] kauditd_printk_skb: 69 callbacks suppressed
	[Mar19 20:40] systemd-fstab-generator[3398]: Ignoring "noauto" option for root device
	[  +0.075881] kauditd_printk_skb: 7 callbacks suppressed
	[Mar19 20:41] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.074775] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.282761] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[  +0.128016] kauditd_printk_skb: 12 callbacks suppressed
	[Mar19 20:42] kauditd_printk_skb: 80 callbacks suppressed
	[Mar19 20:58] hrtimer: interrupt took 4156349 ns
	
	
	==> etcd [ba041437b785408119a53f944789fa2be67b71daddcec3bb9bb6bbc86360cd82] <==
	{"level":"warn","ts":"2024-03-19T20:56:32.163526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"627.88797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:56:32.163664Z","caller":"traceutil/trace.go:171","msg":"trace[294819091] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1186; }","duration":"628.188572ms","start":"2024-03-19T20:56:31.535462Z","end":"2024-03-19T20:56:32.16365Z","steps":["trace[294819091] 'agreement among raft nodes before linearized reading'  (duration: 627.868529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:56:32.163742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:56:31.535449Z","time spent":"628.27661ms","remote":"127.0.0.1:44836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-03-19T20:56:32.163537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"487.686079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:56:32.163982Z","caller":"traceutil/trace.go:171","msg":"trace[1276856528] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1186; }","duration":"488.142632ms","start":"2024-03-19T20:56:31.675825Z","end":"2024-03-19T20:56:32.163968Z","steps":["trace[1276856528] 'agreement among raft nodes before linearized reading'  (duration: 487.684707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:56:32.164061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:56:31.675812Z","time spent":"488.232621ms","remote":"127.0.0.1:44664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-19T20:56:32.163581Z","caller":"traceutil/trace.go:171","msg":"trace[2011911398] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"654.285166ms","start":"2024-03-19T20:56:31.509283Z","end":"2024-03-19T20:56:32.163568Z","steps":["trace[2011911398] 'process raft request'  (duration: 653.64166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:56:32.16436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-19T20:56:31.509269Z","time spent":"654.994205ms","remote":"127.0.0.1:44822","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1185 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-19T20:56:32.382115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.737873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:56:32.382693Z","caller":"traceutil/trace.go:171","msg":"trace[751479680] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1186; }","duration":"197.348138ms","start":"2024-03-19T20:56:32.185327Z","end":"2024-03-19T20:56:32.382675Z","steps":["trace[751479680] 'count revisions from in-memory index tree'  (duration: 196.582497ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:56:33.132709Z","caller":"traceutil/trace.go:171","msg":"trace[380720647] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"102.178492ms","start":"2024-03-19T20:56:33.030509Z","end":"2024-03-19T20:56:33.132688Z","steps":["trace[380720647] 'process raft request'  (duration: 86.268183ms)","trace[380720647] 'compare'  (duration: 15.776568ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:57:19.644117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.477389ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14002410701971804982 > lease_revoke:<id:42528e58722852ed>","response":"size:28"}
	{"level":"info","ts":"2024-03-19T20:57:19.644747Z","caller":"traceutil/trace.go:171","msg":"trace[1284201716] linearizableReadLoop","detail":"{readStateIndex:1438; appliedIndex:1437; }","duration":"110.232123ms","start":"2024-03-19T20:57:19.534493Z","end":"2024-03-19T20:57:19.644725Z","steps":["trace[1284201716] 'read index received'  (duration: 35.593µs)","trace[1284201716] 'applied index is now lower than readState.Index'  (duration: 110.195334ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:57:19.644896Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.378917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:57:19.644938Z","caller":"traceutil/trace.go:171","msg":"trace[56551672] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1226; }","duration":"110.443524ms","start":"2024-03-19T20:57:19.534486Z","end":"2024-03-19T20:57:19.64493Z","steps":["trace[56551672] 'agreement among raft nodes before linearized reading'  (duration: 110.330355ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:57:46.788134Z","caller":"traceutil/trace.go:171","msg":"trace[2045519805] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"139.595339ms","start":"2024-03-19T20:57:46.648514Z","end":"2024-03-19T20:57:46.78811Z","steps":["trace[2045519805] 'process raft request'  (duration: 139.471638ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:58:14.642614Z","caller":"traceutil/trace.go:171","msg":"trace[1482722141] linearizableReadLoop","detail":"{readStateIndex:1493; appliedIndex:1492; }","duration":"110.734671ms","start":"2024-03-19T20:58:14.531857Z","end":"2024-03-19T20:58:14.642591Z","steps":["trace[1482722141] 'read index received'  (duration: 110.453567ms)","trace[1482722141] 'applied index is now lower than readState.Index'  (duration: 279.825µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-19T20:58:14.642853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.955495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-19T20:58:14.642896Z","caller":"traceutil/trace.go:171","msg":"trace[196937796] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1270; }","duration":"111.050355ms","start":"2024-03-19T20:58:14.531833Z","end":"2024-03-19T20:58:14.642883Z","steps":["trace[196937796] 'agreement among raft nodes before linearized reading'  (duration: 110.933826ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:58:15.117317Z","caller":"traceutil/trace.go:171","msg":"trace[500432655] transaction","detail":"{read_only:false; response_revision:1272; number_of_response:1; }","duration":"151.622328ms","start":"2024-03-19T20:58:14.965673Z","end":"2024-03-19T20:58:15.117296Z","steps":["trace[500432655] 'process raft request'  (duration: 151.444016ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:58:16.196232Z","caller":"traceutil/trace.go:171","msg":"trace[1984190283] transaction","detail":"{read_only:false; response_revision:1273; number_of_response:1; }","duration":"123.617578ms","start":"2024-03-19T20:58:16.072586Z","end":"2024-03-19T20:58:16.196204Z","steps":["trace[1984190283] 'process raft request'  (duration: 123.25115ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:58:16.482561Z","caller":"traceutil/trace.go:171","msg":"trace[542991558] transaction","detail":"{read_only:false; response_revision:1274; number_of_response:1; }","duration":"109.522139ms","start":"2024-03-19T20:58:16.373017Z","end":"2024-03-19T20:58:16.482539Z","steps":["trace[542991558] 'process raft request'  (duration: 109.312519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-19T20:58:16.746071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.82687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-19T20:58:16.746312Z","caller":"traceutil/trace.go:171","msg":"trace[1587746115] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1274; }","duration":"181.115153ms","start":"2024-03-19T20:58:16.565181Z","end":"2024-03-19T20:58:16.746296Z","steps":["trace[1587746115] 'count revisions from in-memory index tree'  (duration: 180.766149ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-19T20:58:33.441732Z","caller":"traceutil/trace.go:171","msg":"trace[490930198] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"188.478027ms","start":"2024-03-19T20:58:33.253224Z","end":"2024-03-19T20:58:33.441702Z","steps":["trace[490930198] 'process raft request'  (duration: 188.337447ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:59:23 up 23 min,  0 users,  load average: 0.73, 0.30, 0.22
	Linux default-k8s-diff-port-385240 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [213fcde428339d494a7e039d4238b425a35fc19f11069500bfc11ee100b1c6ee] <==
	W0319 20:56:03.678723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:56:03.678837       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:56:03.679597       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0319 20:56:32.165373       1 trace.go:236] Trace[2025581409]: "List" accept:application/json, */*,audit-id:7e0cc3b2-d88e-473f-9ad4-88b220b1cf72,client:192.168.39.1,api-group:,api-version:v1,name:,subresource:,namespace:kubernetes-dashboard,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (19-Mar-2024 20:56:31.534) (total time: 630ms):
	Trace[2025581409]: ["List(recursive=true) etcd3" audit-id:7e0cc3b2-d88e-473f-9ad4-88b220b1cf72,key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: 630ms (20:56:31.535)]
	Trace[2025581409]: [630.342599ms] [630.342599ms] END
	I0319 20:56:32.166010       1 trace.go:236] Trace[538611041]: "Update" accept:application/json, */*,audit-id:6d8662de-a178-4a81-8462-e95eb3f92f21,client:192.168.39.77,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (19-Mar-2024 20:56:31.507) (total time: 658ms):
	Trace[538611041]: ["GuaranteedUpdate etcd3" audit-id:6d8662de-a178-4a81-8462-e95eb3f92f21,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 657ms (20:56:31.508)
	Trace[538611041]:  ---"Txn call completed" 657ms (20:56:32.165)]
	Trace[538611041]: [658.163091ms] [658.163091ms] END
	W0319 20:57:03.678856       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:57:03.678931       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:57:03.678941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:57:03.680083       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:57:03.681320       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:57:03.681356       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:59:03.679127       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:59:03.679244       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:59:03.679258       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:59:03.681889       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:59:03.682017       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:59:03.682029       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [28d1f1e818e44bcf6cbfdafdf23e82029df033f6ffc1e65e61a599d04e3e2946] <==
	W0319 20:40:52.995687       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.070035       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.169092       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.295652       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.348769       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.359805       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.394784       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.429823       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.435503       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.480264       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.484996       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.542805       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.546747       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.571709       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.646687       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.663537       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.668024       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:53.964273       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.007596       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.155302       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.441699       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.569152       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.786384       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:54.886057       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0319 20:40:55.052270       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [21a093811a77e70a4b20e19c9af3b234acb5cccb4c3a8b4419db27cf5b10bfaf] <==
	I0319 20:53:48.534485       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:54:17.973475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:54:18.544062       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:54:47.983264       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:54:48.558263       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:55:17.991351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:55:18.567084       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:55:47.998330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:55:48.577676       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:56:18.005535       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:56:18.586289       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:56:48.013026       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:56:48.597560       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:57:18.020506       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:57:18.611656       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:57:38.988118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="331.451µs"
	E0319 20:57:48.026261       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:57:48.622351       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:57:49.995824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="264.944µs"
	E0319 20:58:18.034238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:58:18.639865       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:58:48.041460       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:58:48.649626       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:59:18.048746       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:59:18.664330       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [65a6211bab4fa92b108d1aafb0b58c3dbac02954d42150d3efe2b41225cb8827] <==
	I0319 20:41:19.692516       1 server_others.go:72] "Using iptables proxy"
	I0319 20:41:19.726024       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	I0319 20:41:19.916370       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0319 20:41:19.916439       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:41:19.916498       1 server_others.go:168] "Using iptables Proxier"
	I0319 20:41:19.923760       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:41:19.924272       1 server.go:865] "Version info" version="v1.29.3"
	I0319 20:41:19.924951       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:41:19.926009       1 config.go:188] "Starting service config controller"
	I0319 20:41:19.926030       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0319 20:41:19.926052       1 config.go:97] "Starting endpoint slice config controller"
	I0319 20:41:19.926056       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0319 20:41:19.926769       1 config.go:315] "Starting node config controller"
	I0319 20:41:19.926779       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0319 20:41:20.027569       1 shared_informer.go:318] Caches are synced for node config
	I0319 20:41:20.027643       1 shared_informer.go:318] Caches are synced for service config
	I0319 20:41:20.027689       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0ec51f453399cbafd56d4714d9418f9dfb983cd1e2e983150ca580b5a09d8b3c] <==
	E0319 20:41:02.745729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 20:41:02.745736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 20:41:02.745747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 20:41:02.745754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:02.745965       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:41:02.746126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:41:02.755963       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:41:02.756056       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0319 20:41:03.579139       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:41:03.579245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:41:03.591023       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:41:03.591163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:41:03.629227       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:41:03.629293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:41:03.673318       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:03.673487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:03.791689       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 20:41:03.791746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:03.948558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0319 20:41:03.948659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0319 20:41:03.982355       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:41:03.982513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0319 20:41:04.182325       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:41:04.182386       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 20:41:06.092564       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:57:06 default-k8s-diff-port-385240 kubelet[3723]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:57:10 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:10.969174    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:57:24 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:24.985265    3723 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 19 20:57:24 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:24.985755    3723 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 19 20:57:24 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:24.986045    3723 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b5mtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-nv288_kube-system(17b4b56d-bbde-4dbf-8441-bbaee4f8ded5): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 19 20:57:24 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:24.986569    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:57:38 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:38.969233    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:57:49 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:57:49.970216    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:58:03 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:58:03.970293    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:58:06 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:58:06.027607    3723 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:58:06 default-k8s-diff-port-385240 kubelet[3723]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:58:06 default-k8s-diff-port-385240 kubelet[3723]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:58:06 default-k8s-diff-port-385240 kubelet[3723]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:58:06 default-k8s-diff-port-385240 kubelet[3723]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:58:14 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:58:14.970122    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:58:28 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:58:28.970907    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:58:41 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:58:41.969782    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:58:53 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:58:53.969347    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:59:06 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:59:06.028129    3723 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 19 20:59:06 default-k8s-diff-port-385240 kubelet[3723]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:59:06 default-k8s-diff-port-385240 kubelet[3723]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:59:06 default-k8s-diff-port-385240 kubelet[3723]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:59:06 default-k8s-diff-port-385240 kubelet[3723]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:59:06 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:59:06.969381    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	Mar 19 20:59:17 default-k8s-diff-port-385240 kubelet[3723]: E0319 20:59:17.970629    3723 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv288" podUID="17b4b56d-bbde-4dbf-8441-bbaee4f8ded5"
	
	
	==> storage-provisioner [e5edce9fd30e2ea3d276b274ca622e3c0fe6a608da8a62f2fab15bb28052de3b] <==
	I0319 20:41:21.105613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:41:21.128050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:41:21.128241       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:41:21.154159       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:41:21.154303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-385240_acf47112-9fa7-4021-9c0f-0021669b91bc!
	I0319 20:41:21.158148       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"071ad68d-5ddb-4392-ba9f-ab05da3e1e3c", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-385240_acf47112-9fa7-4021-9c0f-0021669b91bc became leader
	I0319 20:41:21.254958       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-385240_acf47112-9fa7-4021-9c0f-0021669b91bc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nv288
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 describe pod metrics-server-57f55c9bc5-nv288
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-385240 describe pod metrics-server-57f55c9bc5-nv288: exit status 1 (69.260411ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nv288" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-385240 describe pod metrics-server-57f55c9bc5-nv288: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (536.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (238.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-414130 -n no-preload-414130
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-19 20:54:55.714245085 +0000 UTC m=+6619.834864860
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-414130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-414130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.365µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-414130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
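As a reference point, a minimal sketch (assuming only the context, namespace, and deployment name from the commands above; not part of the test run) of inspecting which image the dashboard-metrics-scraper deployment actually carries, for comparison against the expected registry.k8s.io/echoserver:1.4:

	kubectl --context no-preload-414130 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Prints the deployment's container image(s); an error or empty output
	# here would be consistent with the empty "Addon deployment info" above.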
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-414130 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-414130 logs -n 25: (1.408408865s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:54 UTC | 19 Mar 24 20:54 UTC |
	| start   | -p newest-cni-587652 --memory=2200 --alsologtostderr   | newest-cni-587652            | jenkins | v1.32.0 | 19 Mar 24 20:54 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:54:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:54:50.689955   64588 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:54:50.690242   64588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:54:50.690257   64588 out.go:304] Setting ErrFile to fd 2...
	I0319 20:54:50.690265   64588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:54:50.690537   64588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:54:50.691167   64588 out.go:298] Setting JSON to false
	I0319 20:54:50.692105   64588 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9389,"bootTime":1710872302,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:54:50.692164   64588 start.go:139] virtualization: kvm guest
	I0319 20:54:50.694449   64588 out.go:177] * [newest-cni-587652] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:54:50.696112   64588 notify.go:220] Checking for updates...
	I0319 20:54:50.696124   64588 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:54:50.697519   64588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:54:50.698750   64588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:54:50.700013   64588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:54:50.701280   64588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:54:50.702539   64588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:54:50.704094   64588 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:54:50.704178   64588 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:54:50.704275   64588 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:54:50.704372   64588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:54:50.745033   64588 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:54:50.746169   64588 start.go:297] selected driver: kvm2
	I0319 20:54:50.746180   64588 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:54:50.746189   64588 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:54:50.746874   64588 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:54:50.746947   64588 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:54:50.764722   64588 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:54:50.764774   64588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0319 20:54:50.764806   64588 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0319 20:54:50.765164   64588 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0319 20:54:50.765237   64588 cni.go:84] Creating CNI manager for ""
	I0319 20:54:50.765255   64588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:54:50.765269   64588 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 20:54:50.765353   64588 start.go:340] cluster config:
	{Name:newest-cni-587652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-587652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:54:50.765458   64588 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:54:50.767737   64588 out.go:177] * Starting "newest-cni-587652" primary control-plane node in "newest-cni-587652" cluster
	I0319 20:54:50.769027   64588 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:54:50.769056   64588 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0319 20:54:50.769062   64588 cache.go:56] Caching tarball of preloaded images
	I0319 20:54:50.769144   64588 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:54:50.769156   64588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on crio
	I0319 20:54:50.769256   64588 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/config.json ...
	I0319 20:54:50.769274   64588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/newest-cni-587652/config.json: {Name:mk8fc0e531bc0eb7e6e6dd1bab20986c19d15ae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:54:50.769426   64588 start.go:360] acquireMachinesLock for newest-cni-587652: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:54:50.769463   64588 start.go:364] duration metric: took 20.483µs to acquireMachinesLock for "newest-cni-587652"
	I0319 20:54:50.769485   64588 start.go:93] Provisioning new machine with config: &{Name:newest-cni-587652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-587652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:54:50.769572   64588 start.go:125] createHost starting for "" (driver="kvm2")
	I0319 20:54:50.771339   64588 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0319 20:54:50.771503   64588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:54:50.771546   64588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:54:50.785456   64588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0319 20:54:50.785882   64588 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:54:50.786474   64588 main.go:141] libmachine: Using API Version  1
	I0319 20:54:50.786509   64588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:54:50.786800   64588 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:54:50.786997   64588 main.go:141] libmachine: (newest-cni-587652) Calling .GetMachineName
	I0319 20:54:50.787167   64588 main.go:141] libmachine: (newest-cni-587652) Calling .DriverName
	I0319 20:54:50.787320   64588 start.go:159] libmachine.API.Create for "newest-cni-587652" (driver="kvm2")
	I0319 20:54:50.787357   64588 client.go:168] LocalClient.Create starting
	I0319 20:54:50.787388   64588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem
	I0319 20:54:50.787430   64588 main.go:141] libmachine: Decoding PEM data...
	I0319 20:54:50.787447   64588 main.go:141] libmachine: Parsing certificate...
	I0319 20:54:50.787498   64588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem
	I0319 20:54:50.787517   64588 main.go:141] libmachine: Decoding PEM data...
	I0319 20:54:50.787528   64588 main.go:141] libmachine: Parsing certificate...
	I0319 20:54:50.787545   64588 main.go:141] libmachine: Running pre-create checks...
	I0319 20:54:50.787554   64588 main.go:141] libmachine: (newest-cni-587652) Calling .PreCreateCheck
	I0319 20:54:50.787920   64588 main.go:141] libmachine: (newest-cni-587652) Calling .GetConfigRaw
	I0319 20:54:50.788366   64588 main.go:141] libmachine: Creating machine...
	I0319 20:54:50.788383   64588 main.go:141] libmachine: (newest-cni-587652) Calling .Create
	I0319 20:54:50.788509   64588 main.go:141] libmachine: (newest-cni-587652) Creating KVM machine...
	I0319 20:54:50.789724   64588 main.go:141] libmachine: (newest-cni-587652) DBG | found existing default KVM network
	I0319 20:54:50.791313   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:50.791125   64611 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:50:1f} reservation:<nil>}
	I0319 20:54:50.792151   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:50.792089   64611 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:cb:5d} reservation:<nil>}
	I0319 20:54:50.793277   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:50.793193   64611 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e69a0}
	I0319 20:54:50.793301   64588 main.go:141] libmachine: (newest-cni-587652) DBG | created network xml: 
	I0319 20:54:50.793311   64588 main.go:141] libmachine: (newest-cni-587652) DBG | <network>
	I0319 20:54:50.793320   64588 main.go:141] libmachine: (newest-cni-587652) DBG |   <name>mk-newest-cni-587652</name>
	I0319 20:54:50.793333   64588 main.go:141] libmachine: (newest-cni-587652) DBG |   <dns enable='no'/>
	I0319 20:54:50.793351   64588 main.go:141] libmachine: (newest-cni-587652) DBG |   
	I0319 20:54:50.793365   64588 main.go:141] libmachine: (newest-cni-587652) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0319 20:54:50.793375   64588 main.go:141] libmachine: (newest-cni-587652) DBG |     <dhcp>
	I0319 20:54:50.793382   64588 main.go:141] libmachine: (newest-cni-587652) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0319 20:54:50.793387   64588 main.go:141] libmachine: (newest-cni-587652) DBG |     </dhcp>
	I0319 20:54:50.793392   64588 main.go:141] libmachine: (newest-cni-587652) DBG |   </ip>
	I0319 20:54:50.793396   64588 main.go:141] libmachine: (newest-cni-587652) DBG |   
	I0319 20:54:50.793401   64588 main.go:141] libmachine: (newest-cni-587652) DBG | </network>
	I0319 20:54:50.793406   64588 main.go:141] libmachine: (newest-cni-587652) DBG | 
	I0319 20:54:50.799010   64588 main.go:141] libmachine: (newest-cni-587652) DBG | trying to create private KVM network mk-newest-cni-587652 192.168.61.0/24...
	I0319 20:54:50.875681   64588 main.go:141] libmachine: (newest-cni-587652) DBG | private KVM network mk-newest-cni-587652 192.168.61.0/24 created
	I0319 20:54:50.875718   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:50.875655   64611 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:54:50.875731   64588 main.go:141] libmachine: (newest-cni-587652) Setting up store path in /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652 ...
	I0319 20:54:50.875752   64588 main.go:141] libmachine: (newest-cni-587652) Building disk image from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 20:54:50.875767   64588 main.go:141] libmachine: (newest-cni-587652) Downloading /home/jenkins/minikube-integration/18453-10028/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0319 20:54:51.101098   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:51.100973   64611 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/id_rsa...
	I0319 20:54:51.326662   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:51.326532   64611 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/newest-cni-587652.rawdisk...
	I0319 20:54:51.326722   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Writing magic tar header
	I0319 20:54:51.326816   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Writing SSH key tar header
	I0319 20:54:51.326851   64588 main.go:141] libmachine: (newest-cni-587652) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652 (perms=drwx------)
	I0319 20:54:51.326865   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:51.326693   64611 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652 ...
	I0319 20:54:51.326883   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652
	I0319 20:54:51.326893   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube/machines
	I0319 20:54:51.326903   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:54:51.326913   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18453-10028
	I0319 20:54:51.326931   64588 main.go:141] libmachine: (newest-cni-587652) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube/machines (perms=drwxr-xr-x)
	I0319 20:54:51.326941   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0319 20:54:51.326956   64588 main.go:141] libmachine: (newest-cni-587652) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028/.minikube (perms=drwxr-xr-x)
	I0319 20:54:51.326972   64588 main.go:141] libmachine: (newest-cni-587652) Setting executable bit set on /home/jenkins/minikube-integration/18453-10028 (perms=drwxrwxr-x)
	I0319 20:54:51.326985   64588 main.go:141] libmachine: (newest-cni-587652) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0319 20:54:51.326997   64588 main.go:141] libmachine: (newest-cni-587652) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0319 20:54:51.327005   64588 main.go:141] libmachine: (newest-cni-587652) Creating domain...
	I0319 20:54:51.327013   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home/jenkins
	I0319 20:54:51.327025   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Checking permissions on dir: /home
	I0319 20:54:51.327039   64588 main.go:141] libmachine: (newest-cni-587652) DBG | Skipping /home - not owner
	I0319 20:54:51.328223   64588 main.go:141] libmachine: (newest-cni-587652) define libvirt domain using xml: 
	I0319 20:54:51.328276   64588 main.go:141] libmachine: (newest-cni-587652) <domain type='kvm'>
	I0319 20:54:51.328288   64588 main.go:141] libmachine: (newest-cni-587652)   <name>newest-cni-587652</name>
	I0319 20:54:51.328303   64588 main.go:141] libmachine: (newest-cni-587652)   <memory unit='MiB'>2200</memory>
	I0319 20:54:51.328321   64588 main.go:141] libmachine: (newest-cni-587652)   <vcpu>2</vcpu>
	I0319 20:54:51.328333   64588 main.go:141] libmachine: (newest-cni-587652)   <features>
	I0319 20:54:51.328343   64588 main.go:141] libmachine: (newest-cni-587652)     <acpi/>
	I0319 20:54:51.328365   64588 main.go:141] libmachine: (newest-cni-587652)     <apic/>
	I0319 20:54:51.328377   64588 main.go:141] libmachine: (newest-cni-587652)     <pae/>
	I0319 20:54:51.328386   64588 main.go:141] libmachine: (newest-cni-587652)     
	I0319 20:54:51.328397   64588 main.go:141] libmachine: (newest-cni-587652)   </features>
	I0319 20:54:51.328430   64588 main.go:141] libmachine: (newest-cni-587652)   <cpu mode='host-passthrough'>
	I0319 20:54:51.328454   64588 main.go:141] libmachine: (newest-cni-587652)   
	I0319 20:54:51.328462   64588 main.go:141] libmachine: (newest-cni-587652)   </cpu>
	I0319 20:54:51.328473   64588 main.go:141] libmachine: (newest-cni-587652)   <os>
	I0319 20:54:51.328484   64588 main.go:141] libmachine: (newest-cni-587652)     <type>hvm</type>
	I0319 20:54:51.328495   64588 main.go:141] libmachine: (newest-cni-587652)     <boot dev='cdrom'/>
	I0319 20:54:51.328507   64588 main.go:141] libmachine: (newest-cni-587652)     <boot dev='hd'/>
	I0319 20:54:51.328519   64588 main.go:141] libmachine: (newest-cni-587652)     <bootmenu enable='no'/>
	I0319 20:54:51.328530   64588 main.go:141] libmachine: (newest-cni-587652)   </os>
	I0319 20:54:51.328540   64588 main.go:141] libmachine: (newest-cni-587652)   <devices>
	I0319 20:54:51.328567   64588 main.go:141] libmachine: (newest-cni-587652)     <disk type='file' device='cdrom'>
	I0319 20:54:51.328596   64588 main.go:141] libmachine: (newest-cni-587652)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/boot2docker.iso'/>
	I0319 20:54:51.328611   64588 main.go:141] libmachine: (newest-cni-587652)       <target dev='hdc' bus='scsi'/>
	I0319 20:54:51.328622   64588 main.go:141] libmachine: (newest-cni-587652)       <readonly/>
	I0319 20:54:51.328632   64588 main.go:141] libmachine: (newest-cni-587652)     </disk>
	I0319 20:54:51.328643   64588 main.go:141] libmachine: (newest-cni-587652)     <disk type='file' device='disk'>
	I0319 20:54:51.328658   64588 main.go:141] libmachine: (newest-cni-587652)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0319 20:54:51.328675   64588 main.go:141] libmachine: (newest-cni-587652)       <source file='/home/jenkins/minikube-integration/18453-10028/.minikube/machines/newest-cni-587652/newest-cni-587652.rawdisk'/>
	I0319 20:54:51.328689   64588 main.go:141] libmachine: (newest-cni-587652)       <target dev='hda' bus='virtio'/>
	I0319 20:54:51.328695   64588 main.go:141] libmachine: (newest-cni-587652)     </disk>
	I0319 20:54:51.328707   64588 main.go:141] libmachine: (newest-cni-587652)     <interface type='network'>
	I0319 20:54:51.328717   64588 main.go:141] libmachine: (newest-cni-587652)       <source network='mk-newest-cni-587652'/>
	I0319 20:54:51.328724   64588 main.go:141] libmachine: (newest-cni-587652)       <model type='virtio'/>
	I0319 20:54:51.328737   64588 main.go:141] libmachine: (newest-cni-587652)     </interface>
	I0319 20:54:51.328747   64588 main.go:141] libmachine: (newest-cni-587652)     <interface type='network'>
	I0319 20:54:51.328770   64588 main.go:141] libmachine: (newest-cni-587652)       <source network='default'/>
	I0319 20:54:51.328797   64588 main.go:141] libmachine: (newest-cni-587652)       <model type='virtio'/>
	I0319 20:54:51.328810   64588 main.go:141] libmachine: (newest-cni-587652)     </interface>
	I0319 20:54:51.328819   64588 main.go:141] libmachine: (newest-cni-587652)     <serial type='pty'>
	I0319 20:54:51.328830   64588 main.go:141] libmachine: (newest-cni-587652)       <target port='0'/>
	I0319 20:54:51.328838   64588 main.go:141] libmachine: (newest-cni-587652)     </serial>
	I0319 20:54:51.328848   64588 main.go:141] libmachine: (newest-cni-587652)     <console type='pty'>
	I0319 20:54:51.328857   64588 main.go:141] libmachine: (newest-cni-587652)       <target type='serial' port='0'/>
	I0319 20:54:51.328880   64588 main.go:141] libmachine: (newest-cni-587652)     </console>
	I0319 20:54:51.328898   64588 main.go:141] libmachine: (newest-cni-587652)     <rng model='virtio'>
	I0319 20:54:51.328918   64588 main.go:141] libmachine: (newest-cni-587652)       <backend model='random'>/dev/random</backend>
	I0319 20:54:51.328952   64588 main.go:141] libmachine: (newest-cni-587652)     </rng>
	I0319 20:54:51.328974   64588 main.go:141] libmachine: (newest-cni-587652)     
	I0319 20:54:51.328986   64588 main.go:141] libmachine: (newest-cni-587652)     
	I0319 20:54:51.328998   64588 main.go:141] libmachine: (newest-cni-587652)   </devices>
	I0319 20:54:51.329009   64588 main.go:141] libmachine: (newest-cni-587652) </domain>
	I0319 20:54:51.329020   64588 main.go:141] libmachine: (newest-cni-587652) 
	I0319 20:54:51.334925   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:cc:9f:79 in network default
	I0319 20:54:51.335589   64588 main.go:141] libmachine: (newest-cni-587652) Ensuring networks are active...
	I0319 20:54:51.335604   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:51.336344   64588 main.go:141] libmachine: (newest-cni-587652) Ensuring network default is active
	I0319 20:54:51.336697   64588 main.go:141] libmachine: (newest-cni-587652) Ensuring network mk-newest-cni-587652 is active
	I0319 20:54:51.337105   64588 main.go:141] libmachine: (newest-cni-587652) Getting domain xml...
	I0319 20:54:51.337732   64588 main.go:141] libmachine: (newest-cni-587652) Creating domain...
	I0319 20:54:52.578332   64588 main.go:141] libmachine: (newest-cni-587652) Waiting to get IP...
	I0319 20:54:52.579309   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:52.579751   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:52.579781   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:52.579731   64611 retry.go:31] will retry after 268.103673ms: waiting for machine to come up
	I0319 20:54:52.849095   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:52.849667   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:52.849725   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:52.849645   64611 retry.go:31] will retry after 349.058612ms: waiting for machine to come up
	I0319 20:54:53.200351   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:53.200931   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:53.200962   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:53.200890   64611 retry.go:31] will retry after 433.854779ms: waiting for machine to come up
	I0319 20:54:53.636359   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:53.636863   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:53.636888   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:53.636815   64611 retry.go:31] will retry after 373.076594ms: waiting for machine to come up
	I0319 20:54:54.011303   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:54.011755   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:54.011782   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:54.011710   64611 retry.go:31] will retry after 609.559022ms: waiting for machine to come up
	I0319 20:54:54.622325   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:54.622792   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:54.622826   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:54.622743   64611 retry.go:31] will retry after 796.999009ms: waiting for machine to come up
	I0319 20:54:55.421552   64588 main.go:141] libmachine: (newest-cni-587652) DBG | domain newest-cni-587652 has defined MAC address 52:54:00:13:03:1e in network mk-newest-cni-587652
	I0319 20:54:55.422102   64588 main.go:141] libmachine: (newest-cni-587652) DBG | unable to find current IP address of domain newest-cni-587652 in network mk-newest-cni-587652
	I0319 20:54:55.422136   64588 main.go:141] libmachine: (newest-cni-587652) DBG | I0319 20:54:55.422031   64611 retry.go:31] will retry after 807.954004ms: waiting for machine to come up
	
	
	==> CRI-O <==
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.439725134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881696439701182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d654f07-f665-453b-90bc-c25fd838f50b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.440476002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0af2cc02-9ad7-496c-aa01-20127c6fe81a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.440530292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0af2cc02-9ad7-496c-aa01-20127c6fe81a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.440702454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0af2cc02-9ad7-496c-aa01-20127c6fe81a name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.488313644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca2052dd-516f-48e6-8e73-bbd0b8aaeabd name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.488385799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca2052dd-516f-48e6-8e73-bbd0b8aaeabd name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.489745891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9913df3b-1df9-4909-9227-4b4b51388f7f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.490293358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881696490267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9913df3b-1df9-4909-9227-4b4b51388f7f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.490830294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72453aaf-bbd2-43d6-a7db-58787dcefac8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.491259506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72453aaf-bbd2-43d6-a7db-58787dcefac8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.491469063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72453aaf-bbd2-43d6-a7db-58787dcefac8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.531576151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd57c5e0-7a56-4147-9965-7a8e52cc1dee name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.531649346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd57c5e0-7a56-4147-9965-7a8e52cc1dee name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.533030620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce14ed85-b2e6-4974-a14d-647c9c7cff2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.533415397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881696533393824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce14ed85-b2e6-4974-a14d-647c9c7cff2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.534017534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b62a9a60-abf4-4dbe-a310-7cad39f1db0e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.534071642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b62a9a60-abf4-4dbe-a310-7cad39f1db0e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.534249239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b62a9a60-abf4-4dbe-a310-7cad39f1db0e name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.573633915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9df8f63b-a703-4021-8bf4-b6a72390fba9 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.573806435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9df8f63b-a703-4021-8bf4-b6a72390fba9 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.575649059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae58b846-00de-4cff-a5cd-4914b24e17fb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.576138974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881696576105206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae58b846-00de-4cff-a5cd-4914b24e17fb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.576819722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49aa91e6-6e19-4b22-bf85-2f5e4a9f6193 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.576902263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49aa91e6-6e19-4b22-bf85-2f5e4a9f6193 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:56 no-preload-414130 crio[710]: time="2024-03-19 20:54:56.577136666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c,PodSandboxId:5474f6961a0019aec1cba9b342ac713b283374ae7d7342c589acb8feb9687204,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710880913784403435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f9e4db1-704f-4e62-816c-c4e1a9e70ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 9217f1e0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33,PodSandboxId:f41e04b92463c13d00e8325e4f9b0f7911936ef69cb4f1d41def94f5003d8306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912864808473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jtdrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1199d0b5-8f7b-47ca-bdd4-af092b6150ca,},Annotations:map[string]string{io.kubernetes.container.hash: 588435cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d,PodSandboxId:9e896e800537311c6b61aaca85fc92d55d731026f32b29b8d3d71b4a1178fec6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710880912326475999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm8cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c
50b962-ed13-4511-8bef-2a2657f26276,},Annotations:map[string]string{io.kubernetes.container.hash: c0333687,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233,PodSandboxId:5757d6c7b01c6e6ccc8006c6b809c0c3650e3cc31b9d5a70f1c1a7e853486413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1710880911739790161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7m4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06239fd6-3053-4a7b-9a73-62886b59fa6a,},Annotations:map[string]string{io.kubernetes.container.hash: c7a23e3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52,PodSandboxId:faa7e83f965c8d25ccefc1703ed9b052fb2888c0b75ae1c0edbac13be5948522,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1710880892069383731,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5770f2895db5bf0f7197939d73721b15,},Annotations:map[string]string{io.kubernetes.container.hash: 50edec97,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c,PodSandboxId:e516c7e2a536e08098f872091aca95f4d64455188778abb2d9638459450222a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1710880892092330752,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b63ff8109d6dceda30dac6065446c32,},Annotations:map[string]string{io.kubernetes.container.hash: 72c110c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0,PodSandboxId:fa7c3e5894b44b4fc42d07f2fab1613280f326ab1d5e3f1976938f9c84859d50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1710880892012911174,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60115988efd12a25be1d9eccda362138,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f,PodSandboxId:cfa6b9652d347ac64260ad6add88aa335776af720c1ee440cb246eda94084d1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1710880892016263051,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-414130,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367fb688ce35df12d609fff66da3fca7,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49aa91e6-6e19-4b22-bf85-2f5e4a9f6193 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ddf2de435243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   5474f6961a001       storage-provisioner
	282175d575751       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   f41e04b92463c       coredns-7db6d8ff4d-jtdrs
	f55feb03feb19       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   9e896e8005373       coredns-7db6d8ff4d-jm8cl
	d1b92d6f7f1ca       3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8   13 minutes ago      Running             kube-proxy                0                   5757d6c7b01c6       kube-proxy-m7m4h
	693eb66b9864a       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   13 minutes ago      Running             kube-apiserver            2                   e516c7e2a536e       kube-apiserver-no-preload-414130
	f1b8ef909c169       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   13 minutes ago      Running             etcd                      2                   faa7e83f965c8       etcd-no-preload-414130
	0ea3891af1838       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   13 minutes ago      Running             kube-controller-manager   2                   cfa6b9652d347       kube-controller-manager-no-preload-414130
	00bbe82194b56       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   13 minutes ago      Running             kube-scheduler            2                   fa7c3e5894b44       kube-scheduler-no-preload-414130
	
	
	==> coredns [282175d57575158137711119c6a358a70e8aaaeae01845ed5996287456c80b33] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f55feb03feb195d1028681a52e4b5b7ecafbdf8e2f9b650ec9d401d2470fd69d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-414130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-414130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce
	                    minikube.k8s.io/name=no-preload-414130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Mar 2024 20:41:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-414130
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Mar 2024 20:54:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Mar 2024 20:52:09 +0000   Tue, 19 Mar 2024 20:41:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Mar 2024 20:52:09 +0000   Tue, 19 Mar 2024 20:41:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Mar 2024 20:52:09 +0000   Tue, 19 Mar 2024 20:41:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Mar 2024 20:52:09 +0000   Tue, 19 Mar 2024 20:41:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.29
	  Hostname:    no-preload-414130
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b75323fa4d64092b46f8ef8b0374374
	  System UUID:                2b75323f-a4d6-4092-b46f-8ef8b0374374
	  Boot ID:                    fda99eb1-b91c-4a0c-8d33-8aab37267322
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-beta.0
	  Kube-Proxy Version:         v1.30.0-beta.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jm8cl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-jtdrs                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-414130                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-414130             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-414130    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-m7m4h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-414130             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-27n2b              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-414130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-414130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-414130 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-414130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-414130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-414130 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-414130 event: Registered Node no-preload-414130 in Controller
	
	
	==> dmesg <==
	[  +0.042034] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.888028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.566903] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.754827] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.504022] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.064437] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065025] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.194322] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.153356] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.316979] systemd-fstab-generator[694]: Ignoring "noauto" option for root device
	[ +17.644577] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.072749] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.818056] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +5.588931] kauditd_printk_skb: 94 callbacks suppressed
	[  +7.363506] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.225918] kauditd_printk_skb: 20 callbacks suppressed
	[Mar19 20:41] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.885807] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +7.053887] systemd-fstab-generator[4148]: Ignoring "noauto" option for root device
	[  +0.088351] kauditd_printk_skb: 55 callbacks suppressed
	[ +13.789090] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.082567] systemd-fstab-generator[4376]: Ignoring "noauto" option for root device
	[Mar19 20:42] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f1b8ef909c169d09854ddcd2e14ba2eaeef6f42a231428736782711a19285c52] <==
	{"level":"info","ts":"2024-03-19T20:41:32.567477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de switched to configuration voters=(16145973629461328350)"}
	{"level":"info","ts":"2024-03-19T20:41:32.574486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f8680c7cbbe1f1ff","local-member-id":"e012057890ddf1de","added-peer-id":"e012057890ddf1de","added-peer-peer-urls":["https://192.168.72.29:2380"]}
	{"level":"info","ts":"2024-03-19T20:41:32.567534Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.29:2380"}
	{"level":"info","ts":"2024-03-19T20:41:32.575627Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.29:2380"}
	{"level":"info","ts":"2024-03-19T20:41:33.424863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:33.425099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:33.425268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de received MsgPreVoteResp from e012057890ddf1de at term 1"}
	{"level":"info","ts":"2024-03-19T20:41:33.425739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de became candidate at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.425811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de received MsgVoteResp from e012057890ddf1de at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.425857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e012057890ddf1de became leader at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.425897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e012057890ddf1de elected leader e012057890ddf1de at term 2"}
	{"level":"info","ts":"2024-03-19T20:41:33.427696Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.428861Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e012057890ddf1de","local-member-attributes":"{Name:no-preload-414130 ClientURLs:[https://192.168.72.29:2379]}","request-path":"/0/members/e012057890ddf1de/attributes","cluster-id":"f8680c7cbbe1f1ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-19T20:41:33.429147Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:41:33.429616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f8680c7cbbe1f1ff","local-member-id":"e012057890ddf1de","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.429712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.429759Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-19T20:41:33.429804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-19T20:41:33.432689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.29:2379"}
	{"level":"info","ts":"2024-03-19T20:41:33.435682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-19T20:41:33.445474Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-19T20:41:33.445532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-19T20:51:33.488593Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":709}
	{"level":"info","ts":"2024-03-19T20:51:33.498133Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":709,"took":"8.934914ms","hash":2907040022,"current-db-size-bytes":2285568,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2285568,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-03-19T20:51:33.498239Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2907040022,"revision":709,"compact-revision":-1}
	
	
	==> kernel <==
	 20:54:57 up 18 min,  0 users,  load average: 0.30, 0.19, 0.11
	Linux no-preload-414130 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [693eb66b9864a5bab7aec73df63fc8cde7e847f14774b7dce6bfed2c2460246c] <==
	I0319 20:49:36.074333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:51:35.076485       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:51:35.076767       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0319 20:51:36.077612       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:51:36.077686       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:51:36.077696       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:51:36.077751       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:51:36.077795       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:51:36.079070       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:52:36.078313       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:52:36.078396       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:52:36.078707       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:52:36.080054       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:52:36.080093       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:52:36.080100       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:54:36.079079       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:54:36.079177       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0319 20:54:36.079186       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0319 20:54:36.080281       1 handler_proxy.go:93] no RequestInfo found in the context
	E0319 20:54:36.080421       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0319 20:54:36.080462       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0ea3891af183864a4dcd3cccd2102e3943578785cd1103d77272ea5aaf738c0f] <==
	I0319 20:49:21.069617       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:49:50.590504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:49:51.078615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:50:20.597437       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:50:21.088504       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:50:50.604811       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:50:51.101288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:51:20.611262       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:51:21.111858       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:51:50.617614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:51:51.120340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:52:20.623751       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:52:21.133202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:52:49.065076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="332.582µs"
	E0319 20:52:50.633197       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:52:51.142519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0319 20:53:04.072477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="88.233µs"
	E0319 20:53:20.639397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:53:21.155098       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:53:50.645700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:53:51.165074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:54:20.651673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:54:21.174922       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0319 20:54:50.659406       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0319 20:54:51.184884       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d1b92d6f7f1ca63826ca7b7c5fd612606d74e542aecc84990ccba24b74770233] <==
	I0319 20:41:52.144116       1 server_linux.go:69] "Using iptables proxy"
	I0319 20:41:52.213413       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.29"]
	I0319 20:41:52.342237       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0319 20:41:52.342309       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0319 20:41:52.342326       1 server_linux.go:165] "Using iptables Proxier"
	I0319 20:41:52.349912       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0319 20:41:52.350228       1 server.go:872] "Version info" version="v1.30.0-beta.0"
	I0319 20:41:52.350270       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 20:41:52.356060       1 config.go:192] "Starting service config controller"
	I0319 20:41:52.357244       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0319 20:41:52.357356       1 config.go:101] "Starting endpoint slice config controller"
	I0319 20:41:52.357388       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0319 20:41:52.357648       1 config.go:319] "Starting node config controller"
	I0319 20:41:52.357682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0319 20:41:52.458181       1 shared_informer.go:320] Caches are synced for node config
	I0319 20:41:52.464152       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0319 20:41:52.464193       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [00bbe82194b5648a4f16560803f6c954781e0b82891ca3e67fdc989342fd0db0] <==
	W0319 20:41:35.093646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:35.093673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:41:35.093717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 20:41:35.093680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:35.093665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 20:41:35.093738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0319 20:41:35.974662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 20:41:35.974761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0319 20:41:36.003584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:36.003659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:36.028813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 20:41:36.028891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0319 20:41:36.083265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0319 20:41:36.083341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0319 20:41:36.128330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 20:41:36.128613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0319 20:41:36.236226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:36.236300       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:36.241701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 20:41:36.241759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0319 20:41:36.468347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0319 20:41:36.468405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 20:41:36.492173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0319 20:41:36.492267       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0319 20:41:39.586399       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 20:54:29 no-preload-414130 kubelet[4155]: E0319 20:54:29.042064    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:29 no-preload-414130 kubelet[4155]: E0319 20:54:29.042074    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:31 no-preload-414130 kubelet[4155]: E0319 20:54:31.042667    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:31 no-preload-414130 kubelet[4155]: E0319 20:54:31.042728    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:31 no-preload-414130 kubelet[4155]: E0319 20:54:31.042736    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:31 no-preload-414130 kubelet[4155]: E0319 20:54:31.045489    4155 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-27n2b" podUID="2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2"
	Mar 19 20:54:38 no-preload-414130 kubelet[4155]: E0319 20:54:38.095551    4155 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 19 20:54:38 no-preload-414130 kubelet[4155]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 19 20:54:38 no-preload-414130 kubelet[4155]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 19 20:54:38 no-preload-414130 kubelet[4155]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 19 20:54:38 no-preload-414130 kubelet[4155]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 19 20:54:41 no-preload-414130 kubelet[4155]: E0319 20:54:41.042708    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:41 no-preload-414130 kubelet[4155]: E0319 20:54:41.042778    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:41 no-preload-414130 kubelet[4155]: E0319 20:54:41.042786    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:44 no-preload-414130 kubelet[4155]: E0319 20:54:44.042437    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:44 no-preload-414130 kubelet[4155]: E0319 20:54:44.042507    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:44 no-preload-414130 kubelet[4155]: E0319 20:54:44.042515    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:44 no-preload-414130 kubelet[4155]: E0319 20:54:44.044279    4155 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-27n2b" podUID="2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2"
	Mar 19 20:54:55 no-preload-414130 kubelet[4155]: E0319 20:54:55.042623    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:55 no-preload-414130 kubelet[4155]: E0319 20:54:55.043097    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:55 no-preload-414130 kubelet[4155]: E0319 20:54:55.043151    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:55 no-preload-414130 kubelet[4155]: E0319 20:54:55.045464    4155 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-27n2b" podUID="2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2"
	Mar 19 20:54:57 no-preload-414130 kubelet[4155]: E0319 20:54:57.042553    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:57 no-preload-414130 kubelet[4155]: E0319 20:54:57.042615    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 19 20:54:57 no-preload-414130 kubelet[4155]: E0319 20:54:57.042622    4155 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	
	
	==> storage-provisioner [5ddf2de435243cd3f06c7d85f34cb179c52ebba36bff7f7899faf3708a20fe1c] <==
	I0319 20:41:53.906238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 20:41:53.921209       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 20:41:53.921274       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 20:41:53.934619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 20:41:53.934769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-414130_c816ac63-b66a-4ad5-a87f-6278ef1e14a7!
	I0319 20:41:53.935550       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"472fabba-9618-45f6-b6f3-92f3c84ff7af", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-414130_c816ac63-b66a-4ad5-a87f-6278ef1e14a7 became leader
	I0319 20:41:54.035171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-414130_c816ac63-b66a-4ad5-a87f-6278ef1e14a7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-414130 -n no-preload-414130
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-414130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-27n2b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-414130 describe pod metrics-server-569cc877fc-27n2b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-414130 describe pod metrics-server-569cc877fc-27n2b: exit status 1 (66.131479ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-27n2b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-414130 describe pod metrics-server-569cc877fc-27n2b: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (238.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (116.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
E0319 20:54:30.844221   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.28:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.28:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (277.955794ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-159022" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-159022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-159022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.507µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-159022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
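The image assertion above (start_stop_delete_test.go:297) can also be checked by hand once the apiserver is reachable again; the command below is only an illustrative sketch using the profile and namespace shown in the logs above, and the jsonpath expression is just one way to surface the container images, not the test's own query:

	# Print the images used by the dashboard-metrics-scraper deployment and look for
	# the custom image configured via --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-159022 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}' | grep 'registry.k8s.io/echoserver:1.4'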
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (236.431387ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-159022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-159022 logs -n 25: (1.587894622s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:24 UTC | 19 Mar 24 20:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:25 UTC | 19 Mar 24 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-853797                           | kubernetes-upgrade-853797    | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:26 UTC |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:26 UTC | 19 Mar 24 20:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-414130             | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC | 19 Mar 24 20:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-414130                                   | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-421660            | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:28 UTC | 19 Mar 24 20:29 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-428153                              | cert-expiration-428153       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	| delete  | -p                                                     | disable-driver-mounts-502023 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:29 UTC |
	|         | disable-driver-mounts-502023                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC | 19 Mar 24 20:30 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159022        | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-414130                  | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-414130 --memory=2200                     | no-preload-414130            | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:41 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-385240  | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC | 19 Mar 24 20:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-421660                 | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-421660                                  | embed-certs-421660           | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159022             | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC | 19 Mar 24 20:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-159022                              | old-k8s-version-159022       | jenkins | v1.32.0 | 19 Mar 24 20:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-385240       | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-385240 | jenkins | v1.32.0 | 19 Mar 24 20:33 UTC | 19 Mar 24 20:41 UTC |
	|         | default-k8s-diff-port-385240                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 20:33:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 20:33:00.489344   60008 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:33:00.489594   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489603   60008 out.go:304] Setting ErrFile to fd 2...
	I0319 20:33:00.489607   60008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:33:00.489787   60008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:33:00.490297   60008 out.go:298] Setting JSON to false
	I0319 20:33:00.491188   60008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8078,"bootTime":1710872302,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:33:00.491245   60008 start.go:139] virtualization: kvm guest
	I0319 20:33:00.493588   60008 out.go:177] * [default-k8s-diff-port-385240] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:33:00.495329   60008 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:33:00.496506   60008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:33:00.495369   60008 notify.go:220] Checking for updates...
	I0319 20:33:00.499210   60008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:33:00.500494   60008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:33:00.501820   60008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:33:00.503200   60008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:33:00.504837   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:33:00.505191   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.505266   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.519674   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0319 20:33:00.520123   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.520634   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.520656   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.520945   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.521132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.521364   60008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:33:00.521629   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:33:00.521660   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:33:00.535764   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0319 20:33:00.536105   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:33:00.536564   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:33:00.536583   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:33:00.536890   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:33:00.537079   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:33:00.572160   60008 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 20:33:00.573517   60008 start.go:297] selected driver: kvm2
	I0319 20:33:00.573530   60008 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.573663   60008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:33:00.574335   60008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.574423   60008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 20:33:00.588908   60008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 20:33:00.589283   60008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:33:00.589354   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:33:00.589375   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:33:00.589419   60008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:33:00.589532   60008 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 20:33:00.591715   60008 out.go:177] * Starting "default-k8s-diff-port-385240" primary control-plane node in "default-k8s-diff-port-385240" cluster
	I0319 20:32:58.292485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:01.364553   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:00.593043   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:33:00.593084   60008 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 20:33:00.593094   60008 cache.go:56] Caching tarball of preloaded images
	I0319 20:33:00.593156   60008 preload.go:173] Found /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0319 20:33:00.593166   60008 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0319 20:33:00.593281   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:33:00.593454   60008 start.go:360] acquireMachinesLock for default-k8s-diff-port-385240: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:33:07.444550   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:10.516480   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:16.596485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:19.668501   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:25.748504   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:28.820525   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:34.900508   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:37.972545   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:44.052478   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:47.124492   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:53.204484   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:33:56.276536   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:02.356552   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:05.428529   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:11.508540   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:14.580485   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:20.660521   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:23.732555   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:29.812516   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:32.884574   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:38.964472   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:42.036583   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:48.116547   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:51.188507   59019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.29:22: connect: no route to host
	I0319 20:34:54.193037   59415 start.go:364] duration metric: took 3m51.108134555s to acquireMachinesLock for "embed-certs-421660"
	I0319 20:34:54.193108   59415 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:34:54.193120   59415 fix.go:54] fixHost starting: 
	I0319 20:34:54.193458   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:34:54.193487   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:34:54.208614   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0319 20:34:54.209078   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:34:54.209506   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:34:54.209527   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:34:54.209828   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:34:54.209992   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:34:54.210117   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:34:54.211626   59415 fix.go:112] recreateIfNeeded on embed-certs-421660: state=Stopped err=<nil>
	I0319 20:34:54.211661   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	W0319 20:34:54.211820   59415 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:34:54.213989   59415 out.go:177] * Restarting existing kvm2 VM for "embed-certs-421660" ...
	I0319 20:34:54.190431   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:34:54.190483   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.190783   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:34:54.190809   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:34:54.191021   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:34:54.192901   59019 machine.go:97] duration metric: took 4m37.398288189s to provisionDockerMachine
	I0319 20:34:54.192939   59019 fix.go:56] duration metric: took 4m37.41948201s for fixHost
	I0319 20:34:54.192947   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 4m37.419503815s
	W0319 20:34:54.192970   59019 start.go:713] error starting host: provision: host is not running
	W0319 20:34:54.193060   59019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0319 20:34:54.193071   59019 start.go:728] Will try again in 5 seconds ...
	I0319 20:34:54.215391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Start
	I0319 20:34:54.215559   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring networks are active...
	I0319 20:34:54.216249   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network default is active
	I0319 20:34:54.216543   59415 main.go:141] libmachine: (embed-certs-421660) Ensuring network mk-embed-certs-421660 is active
	I0319 20:34:54.216902   59415 main.go:141] libmachine: (embed-certs-421660) Getting domain xml...
	I0319 20:34:54.217595   59415 main.go:141] libmachine: (embed-certs-421660) Creating domain...
	I0319 20:34:55.407058   59415 main.go:141] libmachine: (embed-certs-421660) Waiting to get IP...
	I0319 20:34:55.407855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.408280   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.408343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.408247   60323 retry.go:31] will retry after 202.616598ms: waiting for machine to come up
	I0319 20:34:55.612753   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.613313   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.613341   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.613247   60323 retry.go:31] will retry after 338.618778ms: waiting for machine to come up
	I0319 20:34:55.953776   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:55.954230   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:55.954259   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:55.954164   60323 retry.go:31] will retry after 389.19534ms: waiting for machine to come up
	I0319 20:34:56.344417   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.344855   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.344886   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.344822   60323 retry.go:31] will retry after 555.697854ms: waiting for machine to come up
	I0319 20:34:56.902547   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:56.902990   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:56.903017   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:56.902955   60323 retry.go:31] will retry after 702.649265ms: waiting for machine to come up
	I0319 20:34:57.606823   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:57.607444   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:57.607484   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:57.607388   60323 retry.go:31] will retry after 814.886313ms: waiting for machine to come up
	I0319 20:34:59.194634   59019 start.go:360] acquireMachinesLock for no-preload-414130: {Name:mk40947b31effb7c3f1078cbd662c574a0260f3d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0319 20:34:58.424559   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:58.425066   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:58.425088   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:58.425011   60323 retry.go:31] will retry after 948.372294ms: waiting for machine to come up
	I0319 20:34:59.375490   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:34:59.375857   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:34:59.375884   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:34:59.375809   60323 retry.go:31] will retry after 1.206453994s: waiting for machine to come up
	I0319 20:35:00.584114   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:00.584548   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:00.584572   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:00.584496   60323 retry.go:31] will retry after 1.200177378s: waiting for machine to come up
	I0319 20:35:01.786803   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:01.787139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:01.787167   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:01.787085   60323 retry.go:31] will retry after 1.440671488s: waiting for machine to come up
	I0319 20:35:03.229775   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:03.230179   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:03.230216   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:03.230146   60323 retry.go:31] will retry after 2.073090528s: waiting for machine to come up
	I0319 20:35:05.305427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:05.305904   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:05.305930   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:05.305859   60323 retry.go:31] will retry after 3.463824423s: waiting for machine to come up
	I0319 20:35:08.773517   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:08.773911   59415 main.go:141] libmachine: (embed-certs-421660) DBG | unable to find current IP address of domain embed-certs-421660 in network mk-embed-certs-421660
	I0319 20:35:08.773938   59415 main.go:141] libmachine: (embed-certs-421660) DBG | I0319 20:35:08.773873   60323 retry.go:31] will retry after 4.159170265s: waiting for machine to come up
	I0319 20:35:12.937475   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937965   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has current primary IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.937979   59415 main.go:141] libmachine: (embed-certs-421660) Found IP for machine: 192.168.50.108
	I0319 20:35:12.937987   59415 main.go:141] libmachine: (embed-certs-421660) Reserving static IP address...
	I0319 20:35:12.938372   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.938400   59415 main.go:141] libmachine: (embed-certs-421660) DBG | skip adding static IP to network mk-embed-certs-421660 - found existing host DHCP lease matching {name: "embed-certs-421660", mac: "52:54:00:38:07:af", ip: "192.168.50.108"}
	I0319 20:35:12.938412   59415 main.go:141] libmachine: (embed-certs-421660) Reserved static IP address: 192.168.50.108
	I0319 20:35:12.938435   59415 main.go:141] libmachine: (embed-certs-421660) Waiting for SSH to be available...
	I0319 20:35:12.938448   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Getting to WaitForSSH function...
	I0319 20:35:12.940523   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.940897   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:12.940932   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:12.941037   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH client type: external
	I0319 20:35:12.941069   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa (-rw-------)
	I0319 20:35:12.941102   59415 main.go:141] libmachine: (embed-certs-421660) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:12.941116   59415 main.go:141] libmachine: (embed-certs-421660) DBG | About to run SSH command:
	I0319 20:35:12.941128   59415 main.go:141] libmachine: (embed-certs-421660) DBG | exit 0
	I0319 20:35:14.265612   59621 start.go:364] duration metric: took 3m52.940707164s to acquireMachinesLock for "old-k8s-version-159022"
	I0319 20:35:14.265681   59621 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:14.265689   59621 fix.go:54] fixHost starting: 
	I0319 20:35:14.266110   59621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:14.266146   59621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:14.284370   59621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0319 20:35:14.284756   59621 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:14.285275   59621 main.go:141] libmachine: Using API Version  1
	I0319 20:35:14.285296   59621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:14.285592   59621 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:14.285797   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:14.285936   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetState
	I0319 20:35:14.287461   59621 fix.go:112] recreateIfNeeded on old-k8s-version-159022: state=Stopped err=<nil>
	I0319 20:35:14.287487   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	W0319 20:35:14.287650   59621 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:14.290067   59621 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-159022" ...
	I0319 20:35:13.068386   59415 main.go:141] libmachine: (embed-certs-421660) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:13.068756   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetConfigRaw
	I0319 20:35:13.069421   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.071751   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072101   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.072133   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.072393   59415 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/config.json ...
	I0319 20:35:13.072557   59415 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:13.072574   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:13.072781   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.075005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075343   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.075369   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.075522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.075678   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075816   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.075973   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.076134   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.076364   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.076382   59415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:13.188983   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:13.189017   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189291   59415 buildroot.go:166] provisioning hostname "embed-certs-421660"
	I0319 20:35:13.189319   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.189503   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.191881   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192190   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.192210   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.192389   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.192550   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192696   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.192818   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.192989   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.193145   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.193159   59415 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-421660 && echo "embed-certs-421660" | sudo tee /etc/hostname
	I0319 20:35:13.326497   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-421660
	
	I0319 20:35:13.326524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.329344   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.329765   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.329979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.330179   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330372   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.330547   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.330753   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.330928   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.330943   59415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-421660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-421660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-421660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:13.454265   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
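Note: the script above is minikube's idempotent hostname fix-up; if /etc/hosts has no entry ending in the new name it either rewrites the existing 127.0.1.1 line or appends one. A way to confirm the result by hand over the same SSH identity the run uses (a sketch, not part of the recorded output):

    # hypothetical manual check, reusing the key, user and IP shown later in this log
    ssh -p 22 -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa \
      docker@192.168.50.108 'hostname && grep "^127.0.1.1" /etc/hosts'
    # expected, per the run above:
    #   embed-certs-421660
    #   127.0.1.1 embed-certs-421660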
	I0319 20:35:13.454297   59415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:13.454320   59415 buildroot.go:174] setting up certificates
	I0319 20:35:13.454334   59415 provision.go:84] configureAuth start
	I0319 20:35:13.454348   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetMachineName
	I0319 20:35:13.454634   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:13.457258   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457692   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.457723   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.457834   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.460123   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460436   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.460463   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.460587   59415 provision.go:143] copyHostCerts
	I0319 20:35:13.460643   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:13.460652   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:13.460719   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:13.460815   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:13.460822   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:13.460846   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:13.460917   59415 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:13.460924   59415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:13.460945   59415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:13.461004   59415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.embed-certs-421660 san=[127.0.0.1 192.168.50.108 embed-certs-421660 localhost minikube]
	I0319 20:35:13.553348   59415 provision.go:177] copyRemoteCerts
	I0319 20:35:13.553399   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:13.553424   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.555729   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556036   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.556071   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.556199   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.556406   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.556579   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.556725   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:13.642780   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0319 20:35:13.670965   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:13.698335   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:13.724999   59415 provision.go:87] duration metric: took 270.652965ms to configureAuth
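Note: configureAuth refreshes ca.pem, cert.pem and key.pem under .minikube, signs a server certificate whose SANs cover 127.0.0.1, 192.168.50.108, embed-certs-421660, localhost and minikube, then pushes server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A quick way to inspect the pushed certificate on the guest (a sketch, not part of the recorded run):

    sudo openssl x509 -noout -subject -enddate -in /etc/docker/server.pem
    # list the SANs to confirm the guest IP is covered
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 "Subject Alternative Name"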
	I0319 20:35:13.725022   59415 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:13.725174   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:13.725235   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:13.727653   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.727969   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:13.727988   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:13.728186   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:13.728410   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728581   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:13.728783   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:13.728960   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:13.729113   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:13.729130   59415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:14.012527   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:14.012554   59415 machine.go:97] duration metric: took 939.982813ms to provisionDockerMachine
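Note: the %!s(MISSING) in the sysconfig command a few lines up is a logging artifact; the command template contains a literal %s that the Go logger tries to format. The echoed output shows what was actually written, so the command run on the guest reconstructs to:

    # reconstruction of the logged command, based on the echoed file content above
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio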
	I0319 20:35:14.012568   59415 start.go:293] postStartSetup for "embed-certs-421660" (driver="kvm2")
	I0319 20:35:14.012582   59415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:14.012616   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.012969   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:14.012996   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.015345   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015706   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.015759   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.015864   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.016069   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.016269   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.016409   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.105236   59415 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:14.110334   59415 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:14.110363   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:14.110435   59415 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:14.110534   59415 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:14.110623   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:14.120911   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:14.148171   59415 start.go:296] duration metric: took 135.590484ms for postStartSetup
	I0319 20:35:14.148209   59415 fix.go:56] duration metric: took 19.955089617s for fixHost
	I0319 20:35:14.148234   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.150788   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151139   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.151165   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.151331   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.151514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151667   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.151784   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.151953   59415 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:14.152125   59415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0319 20:35:14.152138   59415 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:14.265435   59415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880514.234420354
	
	I0319 20:35:14.265467   59415 fix.go:216] guest clock: 1710880514.234420354
	I0319 20:35:14.265478   59415 fix.go:229] Guest: 2024-03-19 20:35:14.234420354 +0000 UTC Remote: 2024-03-19 20:35:14.148214105 +0000 UTC m=+251.208119911 (delta=86.206249ms)
	I0319 20:35:14.265507   59415 fix.go:200] guest clock delta is within tolerance: 86.206249ms
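Note: the garbled date +%!s(MISSING).%!N(MISSING) is the same logger artifact; the guest is asked for its wall clock as seconds.nanoseconds and the result is compared with the host-side timestamp, giving the 86.2 ms delta accepted above. Reconstructed:

    # what the guest actually ran (a reconstruction)
    date +%s.%N    # e.g. 1710880514.234420354 in this run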
	I0319 20:35:14.265516   59415 start.go:83] releasing machines lock for "embed-certs-421660", held for 20.072435424s
	I0319 20:35:14.265554   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.265868   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:14.268494   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268846   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.268874   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.268979   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269589   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269751   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:14.269833   59415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:14.269884   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.269956   59415 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:14.269972   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:14.272604   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272771   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.272978   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273005   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273137   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273140   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:14.273160   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:14.273316   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:14.273337   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273473   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:14.273514   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:14.273685   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.273738   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:14.358033   59415 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:14.385511   59415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:14.542052   59415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:14.549672   59415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:14.549747   59415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:14.569110   59415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:14.569137   59415 start.go:494] detecting cgroup driver to use...
	I0319 20:35:14.569193   59415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:14.586644   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:14.601337   59415 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:14.601407   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:14.616158   59415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:14.631754   59415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:14.746576   59415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:14.902292   59415 docker.go:233] disabling docker service ...
	I0319 20:35:14.902353   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:14.920787   59415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:14.938865   59415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:15.078791   59415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:15.214640   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:15.242992   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:15.264698   59415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:15.264755   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.276750   59415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:15.276817   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.288643   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.300368   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.318906   59415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:15.338660   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.351908   59415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:15.372022   59415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
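Note: taken together, the sed edits above configure the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf. Only the touched keys are reconstructed here (a sketch derived from the commands, not a dump of the file):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]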
	I0319 20:35:15.384124   59415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:15.395206   59415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:15.395268   59415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:15.411193   59415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
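Note: the sysctl probe fails only because the br_netfilter module is not loaded yet, which is why the log treats it as "might be okay"; the follow-up loads the module and enables IPv4 forwarding. To verify by hand on the guest (a sketch):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo in the log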
	I0319 20:35:15.422031   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:15.572313   59415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:15.730316   59415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:15.730389   59415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:15.738539   59415 start.go:562] Will wait 60s for crictl version
	I0319 20:35:15.738600   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:35:15.743107   59415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:15.788582   59415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:15.788666   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.819444   59415 ssh_runner.go:195] Run: crio --version
	I0319 20:35:15.859201   59415 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:14.291762   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .Start
	I0319 20:35:14.291950   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring networks are active...
	I0319 20:35:14.292754   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network default is active
	I0319 20:35:14.293240   59621 main.go:141] libmachine: (old-k8s-version-159022) Ensuring network mk-old-k8s-version-159022 is active
	I0319 20:35:14.293606   59621 main.go:141] libmachine: (old-k8s-version-159022) Getting domain xml...
	I0319 20:35:14.294280   59621 main.go:141] libmachine: (old-k8s-version-159022) Creating domain...
	I0319 20:35:15.543975   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting to get IP...
	I0319 20:35:15.544846   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.545239   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.545299   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.545208   60444 retry.go:31] will retry after 309.079427ms: waiting for machine to come up
	I0319 20:35:15.855733   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:15.856149   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:15.856179   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:15.856109   60444 retry.go:31] will retry after 357.593592ms: waiting for machine to come up
	I0319 20:35:16.215759   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.216273   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.216302   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.216222   60444 retry.go:31] will retry after 324.702372ms: waiting for machine to come up
	I0319 20:35:15.860492   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetIP
	I0319 20:35:15.863655   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864032   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:15.864063   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:15.864303   59415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:15.870600   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:15.885694   59415 kubeadm.go:877] updating cluster {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:15.885833   59415 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:15.885890   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:15.924661   59415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:15.924736   59415 ssh_runner.go:195] Run: which lz4
	I0319 20:35:15.929595   59415 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:15.934980   59415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:15.935014   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:17.673355   59415 crio.go:462] duration metric: took 1.743798593s to copy over tarball
	I0319 20:35:17.673428   59415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:16.542460   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:16.542967   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:16.543000   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:16.542921   60444 retry.go:31] will retry after 529.519085ms: waiting for machine to come up
	I0319 20:35:17.074538   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.075051   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.075080   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.075013   60444 retry.go:31] will retry after 614.398928ms: waiting for machine to come up
	I0319 20:35:17.690791   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:17.691263   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:17.691292   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:17.691207   60444 retry.go:31] will retry after 949.214061ms: waiting for machine to come up
	I0319 20:35:18.642501   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:18.643076   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:18.643102   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:18.643003   60444 retry.go:31] will retry after 1.057615972s: waiting for machine to come up
	I0319 20:35:19.702576   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:19.703064   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:19.703098   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:19.703014   60444 retry.go:31] will retry after 1.439947205s: waiting for machine to come up
	I0319 20:35:21.144781   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:21.145136   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:21.145169   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:21.145112   60444 retry.go:31] will retry after 1.377151526s: waiting for machine to come up
	I0319 20:35:20.169596   59415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49612841s)
	I0319 20:35:20.169629   59415 crio.go:469] duration metric: took 2.496240167s to extract the tarball
	I0319 20:35:20.169639   59415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:20.208860   59415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:20.261040   59415 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:35:20.261063   59415 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:35:20.261071   59415 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.29.3 crio true true} ...
	I0319 20:35:20.261162   59415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-421660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:20.261227   59415 ssh_runner.go:195] Run: crio config
	I0319 20:35:20.311322   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:20.311346   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:20.311359   59415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:20.311377   59415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-421660 NodeName:embed-certs-421660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:35:20.311501   59415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-421660"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
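Note: the "0%!"(MISSING) values in evictionHard are the same logger artifact noted earlier, a literal % sequence tripping the Go formatter. Given the comment about disabling disk resource management, the fragment written to kubeadm.yaml should read:

    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"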
	
	I0319 20:35:20.311560   59415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:35:20.323700   59415 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:20.323776   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:20.334311   59415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0319 20:35:20.352833   59415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:20.372914   59415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0319 20:35:20.391467   59415 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:20.395758   59415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:20.408698   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:20.532169   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:20.550297   59415 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660 for IP: 192.168.50.108
	I0319 20:35:20.550320   59415 certs.go:194] generating shared ca certs ...
	I0319 20:35:20.550339   59415 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:20.550507   59415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:20.550574   59415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:20.550586   59415 certs.go:256] generating profile certs ...
	I0319 20:35:20.550700   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/client.key
	I0319 20:35:20.550774   59415 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key.e5ca10b2
	I0319 20:35:20.550824   59415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key
	I0319 20:35:20.550954   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:20.550988   59415 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:20.551001   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:20.551037   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:20.551070   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:20.551101   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:20.551155   59415 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:20.552017   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:20.583444   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:20.616935   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:20.673499   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:20.707988   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0319 20:35:20.734672   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:35:20.761302   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:20.792511   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/embed-certs-421660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:20.819903   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:20.848361   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:20.878230   59415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:20.908691   59415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:20.930507   59415 ssh_runner.go:195] Run: openssl version
	I0319 20:35:20.937088   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:20.949229   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954299   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.954343   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:20.960610   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:20.972162   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:20.984137   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989211   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.989273   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:20.995436   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:21.007076   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:21.018552   59415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024109   59415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.024146   59415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:21.030344   59415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
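Note: the 51391683.0, 3ec20f2e.0 and b5213941.0 names are OpenSSL subject-hash links; each trusted PEM gets a symlink named after its subject hash so OpenSSL can locate it in /etc/ssl/certs. Reproduced as a sketch for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"      # b5213941 for this CA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"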
	I0319 20:35:21.041615   59415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:21.046986   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:21.053533   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:21.060347   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:21.067155   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:21.074006   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:21.080978   59415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
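Note: the -checkend 86400 calls assert that each control-plane certificate remains valid for at least another 24 hours; openssl exits 0 when the certificate will not expire within the window. For example:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for another 24h" || echo "expiring within 24h"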
	I0319 20:35:21.087615   59415 kubeadm.go:391] StartCluster: {Name:embed-certs-421660 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-421660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:21.087695   59415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:21.087745   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.131217   59415 cri.go:89] found id: ""
	I0319 20:35:21.131294   59415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:21.143460   59415 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:21.143487   59415 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:21.143493   59415 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:21.143545   59415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:21.156145   59415 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:21.157080   59415 kubeconfig.go:125] found "embed-certs-421660" server: "https://192.168.50.108:8443"
	I0319 20:35:21.158865   59415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:21.171515   59415 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.108
	I0319 20:35:21.171551   59415 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:21.171561   59415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:21.171607   59415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:21.221962   59415 cri.go:89] found id: ""
	I0319 20:35:21.222028   59415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:21.239149   59415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:21.250159   59415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:21.250185   59415 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:21.250242   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:21.260035   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:21.260107   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:21.270804   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:21.281041   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:21.281106   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:21.291796   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.301883   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:21.301943   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:21.313038   59415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:21.323390   59415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:21.323462   59415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:21.333893   59415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:21.344645   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:21.491596   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.349871   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.592803   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.670220   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:22.802978   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:22.803071   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:22.524618   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:22.525042   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:22.525070   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:22.525002   60444 retry.go:31] will retry after 1.612982479s: waiting for machine to come up
	I0319 20:35:24.139813   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:24.140226   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:24.140249   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:24.140189   60444 retry.go:31] will retry after 2.898240673s: waiting for machine to come up
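Editor's note: the interleaved old-k8s-version-159022 entries above show libmachine polling for the VM's DHCP lease and logging "will retry after ..." with a growing delay each time. The following is only a minimal sketch of that kind of retry loop; the function names (waitForIP, lookupIP), the delay growth, and the timeout are assumptions for illustration and are not minikube's actual retry package API.

// sketch_retry.go - illustrative retry-with-growing-delay loop (not minikube code)
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for asking the hypervisor for the domain's current DHCP lease.
// Here it always fails, so the loop below exercises its retry path.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP polls lookupIP, sleeping a little longer after each failed attempt,
// until an address is found or the overall deadline passes.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	delay := time.Second
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay between attempts
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("old-k8s-version-159022", 10*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}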
	I0319 20:35:23.303983   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.803254   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:23.846475   59415 api_server.go:72] duration metric: took 1.043496842s to wait for apiserver process to appear ...
	I0319 20:35:23.846509   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:35:23.846532   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:23.847060   59415 api_server.go:269] stopped: https://192.168.50.108:8443/healthz: Get "https://192.168.50.108:8443/healthz": dial tcp 192.168.50.108:8443: connect: connection refused
	I0319 20:35:24.347376   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.456794   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.456826   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.456841   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.492793   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:35:26.492827   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:35:26.847365   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:26.857297   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:26.857327   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.346936   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.351748   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:35:27.351775   59415 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:35:27.847430   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:35:27.852157   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:35:27.868953   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:35:27.869006   59415 api_server.go:131] duration metric: took 4.022477349s to wait for apiserver health ...
	I0319 20:35:27.869019   59415 cni.go:84] Creating CNI manager for ""
	I0319 20:35:27.869029   59415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:27.871083   59415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:35:27.872669   59415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:35:27.886256   59415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:35:27.912891   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:35:27.928055   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:35:27.928088   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:35:27.928095   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:35:27.928102   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:35:27.928108   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:35:27.928113   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:35:27.928118   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:35:27.928126   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:35:27.928130   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:35:27.928136   59415 system_pods.go:74] duration metric: took 15.221738ms to wait for pod list to return data ...
	I0319 20:35:27.928142   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:35:27.931854   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:35:27.931876   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:35:27.931888   59415 node_conditions.go:105] duration metric: took 3.74189ms to run NodePressure ...
	I0319 20:35:27.931903   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:28.209912   59415 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215315   59415 kubeadm.go:733] kubelet initialised
	I0319 20:35:28.215343   59415 kubeadm.go:734] duration metric: took 5.403708ms waiting for restarted kubelet to initialise ...
	I0319 20:35:28.215353   59415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:28.221636   59415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.230837   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230868   59415 pod_ready.go:81] duration metric: took 9.198177ms for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.230878   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "coredns-76f75df574-9tdfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.230887   59415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.237452   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237472   59415 pod_ready.go:81] duration metric: took 6.569363ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.237479   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "etcd-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.237485   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.242902   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242919   59415 pod_ready.go:81] duration metric: took 5.427924ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.242926   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.242931   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.316859   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316889   59415 pod_ready.go:81] duration metric: took 73.950437ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.316901   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.316908   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:28.717107   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717133   59415 pod_ready.go:81] duration metric: took 400.215265ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:28.717143   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-proxy-qvn26" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:28.717151   59415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.117365   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117403   59415 pod_ready.go:81] duration metric: took 400.242952ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.117416   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.117427   59415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:29.517914   59415 pod_ready.go:97] node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517950   59415 pod_ready.go:81] duration metric: took 400.512217ms for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:35:29.517962   59415 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-421660" hosting pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:29.517974   59415 pod_ready.go:38] duration metric: took 1.302609845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:29.518009   59415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:35:29.534665   59415 ops.go:34] apiserver oom_adj: -16
	I0319 20:35:29.534686   59415 kubeadm.go:591] duration metric: took 8.39118752s to restartPrimaryControlPlane
	I0319 20:35:29.534697   59415 kubeadm.go:393] duration metric: took 8.447087595s to StartCluster
	I0319 20:35:29.534713   59415 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.534814   59415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:29.536379   59415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:29.536620   59415 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:35:29.538397   59415 out.go:177] * Verifying Kubernetes components...
	I0319 20:35:29.536707   59415 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:35:29.536837   59415 config.go:182] Loaded profile config "embed-certs-421660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:29.539696   59415 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-421660"
	I0319 20:35:29.539709   59415 addons.go:69] Setting metrics-server=true in profile "embed-certs-421660"
	I0319 20:35:29.539739   59415 addons.go:234] Setting addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:29.539747   59415 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-421660"
	W0319 20:35:29.539751   59415 addons.go:243] addon metrics-server should already be in state true
	W0319 20:35:29.539757   59415 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:35:29.539782   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539786   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.539700   59415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:29.539700   59415 addons.go:69] Setting default-storageclass=true in profile "embed-certs-421660"
	I0319 20:35:29.539882   59415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-421660"
	I0319 20:35:29.540079   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540098   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540107   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540120   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.540243   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.540282   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.554668   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0319 20:35:29.554742   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0319 20:35:29.554815   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0319 20:35:29.555109   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555148   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555220   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.555703   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555708   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555722   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555726   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.555828   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.555847   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.556077   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556206   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556273   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.556391   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.556627   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556669   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.556753   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.556787   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.559109   59415 addons.go:234] Setting addon default-storageclass=true in "embed-certs-421660"
	W0319 20:35:29.559126   59415 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:35:29.559150   59415 host.go:66] Checking if "embed-certs-421660" exists ...
	I0319 20:35:29.559390   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.559425   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.570567   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0319 20:35:29.571010   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.571467   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.571492   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.571831   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.572018   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.573621   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.575889   59415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:35:29.574300   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0319 20:35:29.574529   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0319 20:35:29.577448   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:35:29.577473   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:35:29.577496   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.577913   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.577957   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.578350   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578382   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.578751   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.578877   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.578901   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.579318   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.579431   59415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:29.579495   59415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:29.579509   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.580582   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581050   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.581074   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.581166   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.581276   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.583314   59415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:29.581522   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.584941   59415 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.584951   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:35:29.584963   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.584980   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.585154   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.587700   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588076   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.588104   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.588289   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.588463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.588614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.588791   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.594347   59415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0319 20:35:29.594626   59415 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:29.595030   59415 main.go:141] libmachine: Using API Version  1
	I0319 20:35:29.595062   59415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:29.595384   59415 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:29.595524   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetState
	I0319 20:35:29.596984   59415 main.go:141] libmachine: (embed-certs-421660) Calling .DriverName
	I0319 20:35:29.597209   59415 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.597224   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:35:29.597238   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHHostname
	I0319 20:35:29.599955   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600427   59415 main.go:141] libmachine: (embed-certs-421660) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:07:af", ip: ""} in network mk-embed-certs-421660: {Iface:virbr2 ExpiryTime:2024-03-19 21:35:06 +0000 UTC Type:0 Mac:52:54:00:38:07:af Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:embed-certs-421660 Clientid:01:52:54:00:38:07:af}
	I0319 20:35:29.600457   59415 main.go:141] libmachine: (embed-certs-421660) DBG | domain embed-certs-421660 has defined IP address 192.168.50.108 and MAC address 52:54:00:38:07:af in network mk-embed-certs-421660
	I0319 20:35:29.600533   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHPort
	I0319 20:35:29.600682   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHKeyPath
	I0319 20:35:29.600829   59415 main.go:141] libmachine: (embed-certs-421660) Calling .GetSSHUsername
	I0319 20:35:29.600926   59415 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/embed-certs-421660/id_rsa Username:docker}
	I0319 20:35:29.719989   59415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:29.737348   59415 node_ready.go:35] waiting up to 6m0s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:29.839479   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:35:29.839994   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:35:29.840016   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:35:29.852112   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:35:29.904335   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:35:29.904358   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:35:29.969646   59415 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:29.969675   59415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:35:30.031528   59415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:35:31.120085   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.280572793s)
	I0319 20:35:31.120135   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120148   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120172   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268019206s)
	I0319 20:35:31.120214   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120229   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120430   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120448   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120457   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120463   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120544   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120564   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120588   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120606   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.120614   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.120758   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120788   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.120827   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.120833   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.120841   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.127070   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.127085   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.127287   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.127301   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.138956   59415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.107385118s)
	I0319 20:35:31.139006   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139027   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139257   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139301   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139319   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139330   59415 main.go:141] libmachine: Making call to close driver server
	I0319 20:35:31.139342   59415 main.go:141] libmachine: (embed-certs-421660) Calling .Close
	I0319 20:35:31.139546   59415 main.go:141] libmachine: (embed-certs-421660) DBG | Closing plugin on server side
	I0319 20:35:31.139550   59415 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:35:31.139564   59415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:35:31.139579   59415 addons.go:470] Verifying addon metrics-server=true in "embed-certs-421660"
	I0319 20:35:31.141587   59415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:35:27.041835   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:27.042328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:27.042357   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:27.042284   60444 retry.go:31] will retry after 3.286702127s: waiting for machine to come up
	I0319 20:35:30.331199   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:30.331637   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | unable to find current IP address of domain old-k8s-version-159022 in network mk-old-k8s-version-159022
	I0319 20:35:30.331662   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | I0319 20:35:30.331598   60444 retry.go:31] will retry after 4.471669127s: waiting for machine to come up
	I0319 20:35:31.142927   59415 addons.go:505] duration metric: took 1.606231661s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:35:31.741584   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
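Editor's note: the api_server.go entries earlier in this log poll https://192.168.50.108:8443/healthz anonymously, tolerating the 403 and 500 responses emitted while RBAC bootstrap and the post-start hooks finish, until the endpoint returns 200. The sketch below reproduces only that polling idea; the endpoint URL comes from the log, while the client setup, poll interval, and timeout are assumptions and not minikube's implementation.

// sketch_healthz.go - illustrative anonymous /healthz poll (not minikube code)
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz probes the apiserver healthz endpoint until it returns 200
// or the timeout expires. The probe is anonymous, so certificate verification
// is skipped the same way an unauthenticated curl -k would skip it.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	end := time.Now().Add(timeout)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.108:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}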
	I0319 20:35:36.101840   60008 start.go:364] duration metric: took 2m35.508355671s to acquireMachinesLock for "default-k8s-diff-port-385240"
	I0319 20:35:36.101908   60008 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:36.101921   60008 fix.go:54] fixHost starting: 
	I0319 20:35:36.102308   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:36.102352   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:36.118910   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0319 20:35:36.119363   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:36.119926   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:35:36.119957   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:36.120271   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:36.120450   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:36.120614   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:35:36.122085   60008 fix.go:112] recreateIfNeeded on default-k8s-diff-port-385240: state=Stopped err=<nil>
	I0319 20:35:36.122112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	W0319 20:35:36.122284   60008 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:36.124242   60008 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-385240" ...
	I0319 20:35:34.804328   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.804854   59621 main.go:141] libmachine: (old-k8s-version-159022) Found IP for machine: 192.168.61.28
	I0319 20:35:34.804878   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserving static IP address...
	I0319 20:35:34.804901   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has current primary IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.805325   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.805352   59621 main.go:141] libmachine: (old-k8s-version-159022) Reserved static IP address: 192.168.61.28
	I0319 20:35:34.805382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | skip adding static IP to network mk-old-k8s-version-159022 - found existing host DHCP lease matching {name: "old-k8s-version-159022", mac: "52:54:00:be:83:01", ip: "192.168.61.28"}
	I0319 20:35:34.805405   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Getting to WaitForSSH function...
	I0319 20:35:34.805423   59621 main.go:141] libmachine: (old-k8s-version-159022) Waiting for SSH to be available...
	I0319 20:35:34.807233   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807599   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.807642   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.807754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH client type: external
	I0319 20:35:34.807786   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa (-rw-------)
	I0319 20:35:34.807818   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:34.807839   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | About to run SSH command:
	I0319 20:35:34.807858   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | exit 0
	I0319 20:35:34.936775   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:34.937125   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetConfigRaw
	I0319 20:35:34.937685   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:34.940031   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.940449   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.940640   59621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/config.json ...
	I0319 20:35:34.940811   59621 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:34.940827   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:34.941006   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:34.943075   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943441   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:34.943467   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:34.943513   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:34.943653   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943812   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:34.943907   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:34.944048   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:34.944289   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:34.944302   59621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:35.049418   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:35.049443   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049653   59621 buildroot.go:166] provisioning hostname "old-k8s-version-159022"
	I0319 20:35:35.049676   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.049836   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.052555   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.052921   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.052948   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.053092   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.053287   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053436   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.053593   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.053749   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.053955   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.053974   59621 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159022 && echo "old-k8s-version-159022" | sudo tee /etc/hostname
	I0319 20:35:35.172396   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159022
	
	I0319 20:35:35.172445   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.175145   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175465   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.175492   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.175735   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.175937   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176077   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.176204   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.176421   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.176653   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.176683   59621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159022/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:35.290546   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:35.290574   59621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:35.290595   59621 buildroot.go:174] setting up certificates
	I0319 20:35:35.290607   59621 provision.go:84] configureAuth start
	I0319 20:35:35.290618   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetMachineName
	I0319 20:35:35.290903   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:35.293736   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294106   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.294144   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.294293   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.296235   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296553   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.296581   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.296720   59621 provision.go:143] copyHostCerts
	I0319 20:35:35.296778   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:35.296788   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:35.296840   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:35.296941   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:35.296949   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:35.296969   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:35.297031   59621 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:35.297038   59621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:35.297054   59621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:35.297135   59621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159022 san=[127.0.0.1 192.168.61.28 localhost minikube old-k8s-version-159022]
	I0319 20:35:35.382156   59621 provision.go:177] copyRemoteCerts
	I0319 20:35:35.382209   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:35.382231   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.384688   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385011   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.385057   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.385184   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.385371   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.385495   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.385664   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.468119   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:35.494761   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0319 20:35:35.520290   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 20:35:35.546498   59621 provision.go:87] duration metric: took 255.877868ms to configureAuth
	I0319 20:35:35.546534   59621 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:35.546769   59621 config.go:182] Loaded profile config "old-k8s-version-159022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:35:35.546835   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.549473   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.549887   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.549928   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.550089   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.550283   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550450   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.550582   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.550744   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.550943   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.550965   59621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:35.856375   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:35.856401   59621 machine.go:97] duration metric: took 915.578137ms to provisionDockerMachine
	I0319 20:35:35.856413   59621 start.go:293] postStartSetup for "old-k8s-version-159022" (driver="kvm2")
	I0319 20:35:35.856429   59621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:35.856456   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:35.856749   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:35.856778   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.859327   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859702   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.859754   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.859860   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.860040   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.860185   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.860337   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:35.946002   59621 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:35.951084   59621 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:35.951106   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:35.951170   59621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:35.951294   59621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:35.951410   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:35.962854   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:35.990249   59621 start.go:296] duration metric: took 133.822271ms for postStartSetup
	I0319 20:35:35.990288   59621 fix.go:56] duration metric: took 21.724599888s for fixHost
	I0319 20:35:35.990311   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:35.992761   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993107   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:35.993135   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:35.993256   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:35.993458   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993626   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:35.993763   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:35.993955   59621 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:35.994162   59621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I0319 20:35:35.994188   59621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:35:36.101700   59621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880536.082251645
	
	I0319 20:35:36.101725   59621 fix.go:216] guest clock: 1710880536.082251645
	I0319 20:35:36.101735   59621 fix.go:229] Guest: 2024-03-19 20:35:36.082251645 +0000 UTC Remote: 2024-03-19 20:35:35.990292857 +0000 UTC m=+254.817908758 (delta=91.958788ms)
	I0319 20:35:36.101754   59621 fix.go:200] guest clock delta is within tolerance: 91.958788ms
	I0319 20:35:36.101759   59621 start.go:83] releasing machines lock for "old-k8s-version-159022", held for 21.836104733s
	I0319 20:35:36.101782   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.102024   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:36.104734   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105104   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.105128   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.105327   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105789   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.105979   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .DriverName
	I0319 20:35:36.106034   59621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:36.106083   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.106196   59621 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:36.106219   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHHostname
	I0319 20:35:36.108915   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.108942   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109348   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109382   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:36.109406   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109437   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:36.109539   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109664   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHPort
	I0319 20:35:36.109753   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109823   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHKeyPath
	I0319 20:35:36.109913   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110038   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetSSHUsername
	I0319 20:35:36.110048   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.110160   59621 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/old-k8s-version-159022/id_rsa Username:docker}
	I0319 20:35:36.214576   59621 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:36.221821   59621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:36.369705   59621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:36.379253   59621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:36.379318   59621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:36.397081   59621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:36.397106   59621 start.go:494] detecting cgroup driver to use...
	I0319 20:35:36.397175   59621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:36.418012   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:36.433761   59621 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:36.433816   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:36.449756   59621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:36.465353   59621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:36.599676   59621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:36.766247   59621 docker.go:233] disabling docker service ...
	I0319 20:35:36.766318   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:36.783701   59621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:36.799657   59621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:36.929963   59621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:37.064328   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:37.082332   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:37.105267   59621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0319 20:35:37.105333   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.117449   59621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:37.117522   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.129054   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.141705   59621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:37.153228   59621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:37.165991   59621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:37.176987   59621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:37.177050   59621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:37.194750   59621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:37.206336   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:37.356587   59621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 20:35:37.527691   59621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:37.527783   59621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:37.534032   59621 start.go:562] Will wait 60s for crictl version
	I0319 20:35:37.534083   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:37.539268   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:37.585458   59621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:37.585549   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.626478   59621 ssh_runner.go:195] Run: crio --version
	I0319 20:35:37.668459   59621 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0319 20:35:33.742461   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.241937   59415 node_ready.go:53] node "embed-certs-421660" has status "Ready":"False"
	I0319 20:35:36.743420   59415 node_ready.go:49] node "embed-certs-421660" has status "Ready":"True"
	I0319 20:35:36.743447   59415 node_ready.go:38] duration metric: took 7.006070851s for node "embed-certs-421660" to be "Ready" ...
	I0319 20:35:36.743458   59415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:35:36.749810   59415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:36.125778   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Start
	I0319 20:35:36.125974   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring networks are active...
	I0319 20:35:36.126542   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network default is active
	I0319 20:35:36.126934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Ensuring network mk-default-k8s-diff-port-385240 is active
	I0319 20:35:36.127367   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Getting domain xml...
	I0319 20:35:36.128009   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Creating domain...
	I0319 20:35:37.396589   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting to get IP...
	I0319 20:35:37.397626   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398211   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.398294   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.398203   60655 retry.go:31] will retry after 263.730992ms: waiting for machine to come up
	I0319 20:35:37.663811   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664345   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.664379   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.664300   60655 retry.go:31] will retry after 308.270868ms: waiting for machine to come up
	I0319 20:35:37.974625   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975061   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:37.975095   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:37.975027   60655 retry.go:31] will retry after 376.884777ms: waiting for machine to come up
	I0319 20:35:38.353624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354101   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.354129   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.354056   60655 retry.go:31] will retry after 419.389718ms: waiting for machine to come up
	I0319 20:35:38.774777   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775271   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:38.775299   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:38.775224   60655 retry.go:31] will retry after 757.534448ms: waiting for machine to come up
	I0319 20:35:39.534258   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534739   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:39.534766   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:39.534698   60655 retry.go:31] will retry after 921.578914ms: waiting for machine to come up
	I0319 20:35:40.457637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458132   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:40.458154   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:40.458092   60655 retry.go:31] will retry after 1.079774724s: waiting for machine to come up
	I0319 20:35:37.669893   59621 main.go:141] libmachine: (old-k8s-version-159022) Calling .GetIP
	I0319 20:35:37.672932   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673351   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:83:01", ip: ""} in network mk-old-k8s-version-159022: {Iface:virbr3 ExpiryTime:2024-03-19 21:35:27 +0000 UTC Type:0 Mac:52:54:00:be:83:01 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:old-k8s-version-159022 Clientid:01:52:54:00:be:83:01}
	I0319 20:35:37.673381   59621 main.go:141] libmachine: (old-k8s-version-159022) DBG | domain old-k8s-version-159022 has defined IP address 192.168.61.28 and MAC address 52:54:00:be:83:01 in network mk-old-k8s-version-159022
	I0319 20:35:37.673610   59621 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:37.678935   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:37.697644   59621 kubeadm.go:877] updating cluster {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:37.697778   59621 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 20:35:37.697833   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:37.763075   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:37.763153   59621 ssh_runner.go:195] Run: which lz4
	I0319 20:35:37.768290   59621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0319 20:35:37.773545   59621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:37.773576   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0319 20:35:39.901377   59621 crio.go:462] duration metric: took 2.133141606s to copy over tarball
	I0319 20:35:39.901455   59621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0319 20:35:38.759504   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.258580   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:41.539643   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540163   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:41.540192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:41.540113   60655 retry.go:31] will retry after 1.174814283s: waiting for machine to come up
	I0319 20:35:42.716195   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716547   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:42.716576   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:42.716510   60655 retry.go:31] will retry after 1.464439025s: waiting for machine to come up
	I0319 20:35:44.183190   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183673   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:44.183701   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:44.183628   60655 retry.go:31] will retry after 2.304816358s: waiting for machine to come up
	I0319 20:35:43.095177   59621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193691776s)
	I0319 20:35:43.095210   59621 crio.go:469] duration metric: took 3.193804212s to extract the tarball
	I0319 20:35:43.095219   59621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:35:43.139358   59621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:43.179903   59621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0319 20:35:43.179934   59621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:35:43.179980   59621 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.179997   59621 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.180033   59621 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.180044   59621 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.180153   59621 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0319 20:35:43.180190   59621 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.180054   59621 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.180088   59621 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.181614   59621 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0319 20:35:43.181656   59621 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:43.181815   59621 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.181943   59621 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.181955   59621 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.181994   59621 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.181945   59621 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.182046   59621 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.315967   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.323438   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.349992   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.359959   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.369799   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0319 20:35:43.370989   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.383453   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.417962   59621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0319 20:35:43.418010   59621 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.418060   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.425289   59621 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0319 20:35:43.425327   59621 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.425369   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525483   59621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0319 20:35:43.525537   59621 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.525556   59621 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0319 20:35:43.525590   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525592   59621 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0319 20:35:43.525598   59621 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0319 20:35:43.525609   59621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0319 20:35:43.525631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525641   59621 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.525620   59621 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.525670   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.525679   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554535   59621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0319 20:35:43.554578   59621 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.554610   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0319 20:35:43.554631   59621 ssh_runner.go:195] Run: which crictl
	I0319 20:35:43.554683   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0319 20:35:43.554716   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0319 20:35:43.554686   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0319 20:35:43.554784   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0319 20:35:43.554836   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0319 20:35:43.682395   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0319 20:35:43.708803   59621 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0319 20:35:43.708994   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0319 20:35:43.709561   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0319 20:35:43.709625   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0319 20:35:43.715170   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0319 20:35:43.752250   59621 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0319 20:35:44.180318   59621 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:35:44.322268   59621 cache_images.go:92] duration metric: took 1.142314234s to LoadCachedImages
	W0319 20:35:44.322347   59621 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0319 20:35:44.322361   59621 kubeadm.go:928] updating node { 192.168.61.28 8443 v1.20.0 crio true true} ...
	I0319 20:35:44.322494   59621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-159022 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:35:44.322571   59621 ssh_runner.go:195] Run: crio config
	I0319 20:35:44.374464   59621 cni.go:84] Creating CNI manager for ""
	I0319 20:35:44.374499   59621 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:35:44.374514   59621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:35:44.374539   59621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-159022 NodeName:old-k8s-version-159022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0319 20:35:44.374720   59621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-159022"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:35:44.374791   59621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0319 20:35:44.387951   59621 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:35:44.388028   59621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:35:44.399703   59621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0319 20:35:44.421738   59621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:35:44.442596   59621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0319 20:35:44.462640   59621 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I0319 20:35:44.467449   59621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:44.481692   59621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:44.629405   59621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:35:44.650162   59621 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022 for IP: 192.168.61.28
	I0319 20:35:44.650185   59621 certs.go:194] generating shared ca certs ...
	I0319 20:35:44.650200   59621 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:44.650399   59621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:35:44.650474   59621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:35:44.650492   59621 certs.go:256] generating profile certs ...
	I0319 20:35:44.650588   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.key
	I0319 20:35:44.650635   59621 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key.d78c40b4
	I0319 20:35:44.650667   59621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key
	I0319 20:35:44.650771   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:35:44.650804   59621 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:35:44.650813   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:35:44.650841   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:35:44.650864   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:35:44.650883   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:35:44.650923   59621 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:44.651582   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:35:44.681313   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:35:44.709156   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:35:44.736194   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:35:44.781000   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0319 20:35:44.818649   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:35:44.846237   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:35:44.888062   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:35:44.960415   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:35:45.004861   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:35:45.046734   59621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:35:45.073319   59621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:35:45.092025   59621 ssh_runner.go:195] Run: openssl version
	I0319 20:35:45.098070   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:35:45.109701   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115080   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.115135   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:35:45.121661   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:35:45.135854   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:35:45.149702   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.154995   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.155056   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:35:45.161384   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:35:45.173957   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:35:45.186698   59621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191526   59621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.191570   59621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:35:45.197581   59621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
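The three symlink steps above install each CA into OpenSSL's hashed lookup directory: `openssl x509 -hash -noout` prints the certificate's subject hash, and the CA is then linked as /etc/ssl/certs/<hash>.0, which is how OpenSSL resolves trust anchors. A minimal sketch of the same pattern, with an example path taken from the log:

    # Sketch of the hash-symlink step performed above; the path comes from the log.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # OpenSSL looks up CAs by <hash>.0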
	I0319 20:35:45.209797   59621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:35:45.214828   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:35:45.221159   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:35:45.227488   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:35:45.234033   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:35:45.240310   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:35:45.246564   59621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
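The `-checkend 86400` runs above are validity probes: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero exit is the signal to regenerate. A minimal sketch of the same check, reusing one of the cert paths from the log:

    # Same flag as above: a non-zero exit means the cert expires within 24h.
    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver.crt expires within 24 hours" >&2
    fi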
	I0319 20:35:45.252862   59621 kubeadm.go:391] StartCluster: {Name:old-k8s-version-159022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-159022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:35:45.252964   59621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:35:45.253011   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.292764   59621 cri.go:89] found id: ""
	I0319 20:35:45.292861   59621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:35:45.309756   59621 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:35:45.309784   59621 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:35:45.309791   59621 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:35:45.309841   59621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:35:45.324613   59621 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:35:45.326076   59621 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-159022" does not appear in /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:35:45.327161   59621 kubeconfig.go:62] /home/jenkins/minikube-integration/18453-10028/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-159022" cluster setting kubeconfig missing "old-k8s-version-159022" context setting]
	I0319 20:35:45.328566   59621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:35:45.330262   59621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:35:45.342287   59621 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.28
	I0319 20:35:45.342316   59621 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:35:45.342330   59621 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:35:45.342388   59621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:35:45.398700   59621 cri.go:89] found id: ""
	I0319 20:35:45.398805   59621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:35:45.421841   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:35:45.433095   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:35:45.433127   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:35:45.433220   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:35:45.443678   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:35:45.443751   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:35:45.454217   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:35:45.464965   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:35:45.465030   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:35:45.475691   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.487807   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:35:45.487861   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:35:45.499931   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:35:45.514147   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:35:45.514204   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:35:45.528468   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:35:45.540717   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:45.698850   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:43.756917   59415 pod_ready.go:102] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:44.893540   59415 pod_ready.go:92] pod "coredns-76f75df574-9tdfg" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.893576   59415 pod_ready.go:81] duration metric: took 8.143737931s for pod "coredns-76f75df574-9tdfg" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.893592   59415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903602   59415 pod_ready.go:92] pod "etcd-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.903640   59415 pod_ready.go:81] duration metric: took 10.03087ms for pod "etcd-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.903653   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926651   59415 pod_ready.go:92] pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.926682   59415 pod_ready.go:81] duration metric: took 23.020281ms for pod "kube-apiserver-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.926696   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935080   59415 pod_ready.go:92] pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.935113   59415 pod_ready.go:81] duration metric: took 8.409239ms for pod "kube-controller-manager-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.935126   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947241   59415 pod_ready.go:92] pod "kube-proxy-qvn26" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:44.947269   59415 pod_ready.go:81] duration metric: took 12.135421ms for pod "kube-proxy-qvn26" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:44.947280   59415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155416   59415 pod_ready.go:92] pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace has status "Ready":"True"
	I0319 20:35:45.155441   59415 pod_ready.go:81] duration metric: took 208.152938ms for pod "kube-scheduler-embed-certs-421660" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:45.155460   59415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	I0319 20:35:47.165059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
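The pod_ready loop above polls each kube-system pod until its Ready condition turns True, waiting up to 6m per pod; metrics-server is the one still reporting False here. Roughly the same wait expressed with kubectl (context name from the log, label selector is an illustrative assumption):

    # Illustrative kubectl equivalent of the pod_ready wait; the selector is assumed.
    kubectl --context embed-certs-421660 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=6m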
	I0319 20:35:46.490600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491092   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:46.491121   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:46.491050   60655 retry.go:31] will retry after 2.347371858s: waiting for machine to come up
	I0319 20:35:48.841516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.841995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:48.842018   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:48.841956   60655 retry.go:31] will retry after 2.70576525s: waiting for machine to come up
	I0319 20:35:46.644056   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:46.932173   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:35:47.083244   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
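Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases rather than a full init, all against the staged config. Condensed from the commands above:

    # Condensed from the log above: phased kubeadm restart against the staged config.
    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.20.0
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"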
	I0319 20:35:47.177060   59621 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:35:47.177147   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:47.677331   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.177721   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:48.677901   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.177433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.677420   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.177711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:50.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:51.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:49.662363   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.662389   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:51.549431   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549931   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | unable to find current IP address of domain default-k8s-diff-port-385240 in network mk-default-k8s-diff-port-385240
	I0319 20:35:51.549959   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | I0319 20:35:51.549900   60655 retry.go:31] will retry after 3.429745322s: waiting for machine to come up
	I0319 20:35:54.983382   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.983875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Found IP for machine: 192.168.39.77
	I0319 20:35:54.983908   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserving static IP address...
	I0319 20:35:54.983923   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has current primary IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.984212   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.984240   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Reserved static IP address: 192.168.39.77
	I0319 20:35:54.984292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | skip adding static IP to network mk-default-k8s-diff-port-385240 - found existing host DHCP lease matching {name: "default-k8s-diff-port-385240", mac: "52:54:00:46:fd:f0", ip: "192.168.39.77"}
	I0319 20:35:54.984307   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Waiting for SSH to be available...
	I0319 20:35:54.984322   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Getting to WaitForSSH function...
	I0319 20:35:54.986280   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986591   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:54.986624   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:54.986722   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH client type: external
	I0319 20:35:54.986752   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa (-rw-------)
	I0319 20:35:54.986783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:35:54.986796   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | About to run SSH command:
	I0319 20:35:54.986805   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | exit 0
	I0319 20:35:55.112421   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | SSH cmd err, output: <nil>: 
	I0319 20:35:55.112825   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetConfigRaw
	I0319 20:35:55.113456   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.115976   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116349   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.116377   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.116587   60008 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/config.json ...
	I0319 20:35:55.116847   60008 machine.go:94] provisionDockerMachine start ...
	I0319 20:35:55.116874   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:55.117099   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.119475   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.119911   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.119947   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.120112   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.120312   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120478   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.120629   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.120793   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.120970   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.120982   60008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:35:55.229055   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:35:55.229090   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229360   60008 buildroot.go:166] provisioning hostname "default-k8s-diff-port-385240"
	I0319 20:35:55.229390   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.229594   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.232039   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232371   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.232391   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.232574   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.232746   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232866   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.232967   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.233087   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.233251   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.233264   60008 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-385240 && echo "default-k8s-diff-port-385240" | sudo tee /etc/hostname
	I0319 20:35:55.355708   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-385240
	
	I0319 20:35:55.355732   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.358292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358610   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.358641   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.358880   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.359105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359267   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.359415   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.359545   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.359701   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.359724   60008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-385240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-385240/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-385240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:35:55.479083   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:35:55.479109   60008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:35:55.479126   60008 buildroot.go:174] setting up certificates
	I0319 20:35:55.479134   60008 provision.go:84] configureAuth start
	I0319 20:35:55.479143   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetMachineName
	I0319 20:35:55.479433   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:55.482040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482378   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.482408   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.482535   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.484637   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485035   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.485062   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.485212   60008 provision.go:143] copyHostCerts
	I0319 20:35:55.485272   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:35:55.485283   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:35:55.485334   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:35:55.485425   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:35:55.485434   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:35:55.485454   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:35:55.485560   60008 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:35:55.485569   60008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:35:55.485586   60008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:35:55.485642   60008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-385240 san=[127.0.0.1 192.168.39.77 default-k8s-diff-port-385240 localhost minikube]
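The "generating server cert" step above signs a server.pem with the local machine CA and the listed SANs (loopback, the VM IP, the machine name, localhost, minikube). minikube does this in Go; an openssl sketch of an equivalent operation, purely illustrative (file names and validity period are assumptions), would be:

    # Illustrative only: an openssl equivalent of the Go server-cert generation above.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.default-k8s-diff-port-385240" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.77,DNS:default-k8s-diff-port-385240,DNS:localhost,DNS:minikube') \
      -out server.pem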
	I0319 20:35:51.678068   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.177195   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:52.678239   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.177380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:53.677223   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.177180   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:54.677832   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.178134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:55.677904   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.178155   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:56.449710   59019 start.go:364] duration metric: took 57.255031003s to acquireMachinesLock for "no-preload-414130"
	I0319 20:35:56.449774   59019 start.go:96] Skipping create...Using existing machine configuration
	I0319 20:35:56.449786   59019 fix.go:54] fixHost starting: 
	I0319 20:35:56.450187   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:35:56.450225   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:35:56.469771   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0319 20:35:56.470265   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:35:56.470764   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:35:56.470799   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:35:56.471187   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:35:56.471362   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:35:56.471545   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:35:56.473295   59019 fix.go:112] recreateIfNeeded on no-preload-414130: state=Stopped err=<nil>
	I0319 20:35:56.473323   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	W0319 20:35:56.473480   59019 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 20:35:56.475296   59019 out.go:177] * Restarting existing kvm2 VM for "no-preload-414130" ...
	I0319 20:35:56.476767   59019 main.go:141] libmachine: (no-preload-414130) Calling .Start
	I0319 20:35:56.476947   59019 main.go:141] libmachine: (no-preload-414130) Ensuring networks are active...
	I0319 20:35:56.477657   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network default is active
	I0319 20:35:56.478036   59019 main.go:141] libmachine: (no-preload-414130) Ensuring network mk-no-preload-414130 is active
	I0319 20:35:56.478443   59019 main.go:141] libmachine: (no-preload-414130) Getting domain xml...
	I0319 20:35:56.479131   59019 main.go:141] libmachine: (no-preload-414130) Creating domain...
	I0319 20:35:53.663220   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:56.163557   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:35:55.738705   60008 provision.go:177] copyRemoteCerts
	I0319 20:35:55.738779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:35:55.738812   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.741292   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741618   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.741644   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.741835   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.741997   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.742105   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.742260   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:55.828017   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:35:55.854341   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0319 20:35:55.881167   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:35:55.906768   60008 provision.go:87] duration metric: took 427.621358ms to configureAuth
	I0319 20:35:55.906795   60008 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:35:55.907007   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:35:55.907097   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:55.909518   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.909834   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:55.909863   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:55.910008   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:55.910193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:55.910492   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:55.910670   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:55.910835   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:55.910849   60008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:35:56.207010   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:35:56.207036   60008 machine.go:97] duration metric: took 1.090170805s to provisionDockerMachine
	I0319 20:35:56.207049   60008 start.go:293] postStartSetup for "default-k8s-diff-port-385240" (driver="kvm2")
	I0319 20:35:56.207066   60008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:35:56.207086   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.207410   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:35:56.207435   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.210075   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210494   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.210526   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.210671   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.210828   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.211016   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.211167   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.295687   60008 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:35:56.300508   60008 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:35:56.300531   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:35:56.300601   60008 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:35:56.300677   60008 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:35:56.300779   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:35:56.310829   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:35:56.337456   60008 start.go:296] duration metric: took 130.396402ms for postStartSetup
	I0319 20:35:56.337492   60008 fix.go:56] duration metric: took 20.235571487s for fixHost
	I0319 20:35:56.337516   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.339907   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340361   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.340388   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.340552   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.340749   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.340888   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.341040   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.341198   60008 main.go:141] libmachine: Using SSH client type: native
	I0319 20:35:56.341357   60008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0319 20:35:56.341367   60008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0319 20:35:56.449557   60008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880556.425761325
	
	I0319 20:35:56.449580   60008 fix.go:216] guest clock: 1710880556.425761325
	I0319 20:35:56.449587   60008 fix.go:229] Guest: 2024-03-19 20:35:56.425761325 +0000 UTC Remote: 2024-03-19 20:35:56.337496936 +0000 UTC m=+175.893119280 (delta=88.264389ms)
	I0319 20:35:56.449619   60008 fix.go:200] guest clock delta is within tolerance: 88.264389ms
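The guest-clock check above runs `date +%s.%N` inside the VM and compares it with the host's wall clock at the moment the SSH command returned; the ~88ms delta is within tolerance, so no time resync is forced. A rough shell equivalent (the SSH target is a placeholder):

    # Rough illustration of the guest/host clock comparison; the SSH target is a placeholder.
    guest=$(ssh docker@192.168.39.77 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$host - $guest" | bc -l)s"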
	I0319 20:35:56.449624   60008 start.go:83] releasing machines lock for "default-k8s-diff-port-385240", held for 20.347739998s
	I0319 20:35:56.449647   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.449915   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:56.452764   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453172   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.453204   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.453363   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.453973   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454193   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:35:56.454275   60008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:35:56.454328   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.454443   60008 ssh_runner.go:195] Run: cat /version.json
	I0319 20:35:56.454466   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:35:56.457060   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457284   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457383   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457418   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457536   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:56.457555   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:56.457567   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457783   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457831   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:35:56.457977   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:35:56.457995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458126   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:35:56.458139   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.458282   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:35:56.537675   60008 ssh_runner.go:195] Run: systemctl --version
	I0319 20:35:56.564279   60008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:35:56.708113   60008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:35:56.716216   60008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:35:56.716301   60008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:35:56.738625   60008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:35:56.738643   60008 start.go:494] detecting cgroup driver to use...
	I0319 20:35:56.738707   60008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:35:56.756255   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:35:56.772725   60008 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:35:56.772785   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:35:56.793261   60008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:35:56.812368   60008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:35:56.948137   60008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:35:57.139143   60008 docker.go:233] disabling docker service ...
	I0319 20:35:57.139212   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:35:57.156414   60008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:35:57.173655   60008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:35:57.313924   60008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:35:57.459539   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:35:57.478913   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:35:57.506589   60008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:35:57.506663   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.520813   60008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:35:57.520871   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.534524   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.547833   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.568493   60008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:35:57.582367   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.595859   60008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:35:57.616441   60008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
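The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they set the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup = "pod", and allow unprivileged low ports inside pods. A rough Go equivalent of those substitutions is sketched below; it assumes the file layout implied by the commands (including an existing default_sysctls section) and is illustrative, not the code minikube runs.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	// drop any existing conmon_cgroup line (sed '/conmon_cgroup = .*/d'), assuming a trailing newline
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\r?\n`).ReplaceAll(conf, nil)
	// re-add it right after cgroup_manager (sed '/cgroup_manager = .*/a conmon_cgroup = "pod"')
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAll(conf, []byte("$1\nconmon_cgroup = \"pod\""))
	// inject the unprivileged-port sysctl into default_sysctls; the real flow also
	// creates the default_sysctls section first if it is missing.
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAll(conf, []byte("default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\","))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		log.Fatal(err)
	}
}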
	I0319 20:35:57.633329   60008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:35:57.648803   60008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:35:57.648886   60008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:35:57.667845   60008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:35:57.680909   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:35:57.825114   60008 ssh_runner.go:195] Run: sudo systemctl restart crio
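The sysctl probe above fails with status 255 because /proc/sys/net/bridge only exists once the br_netfilter module is loaded, so the runner falls back to modprobe, enables IPv4 forwarding, and restarts CRI-O. A minimal stand-alone sketch of that sequence with os/exec, reusing the exact commands from the log:

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v: %v\n%s", name, args, err, out)
	}
	return err
}

func main() {
	// The proc entry is missing until br_netfilter is loaded, so probe first.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	_ = run("sudo", "systemctl", "daemon-reload")
	if err := run("sudo", "systemctl", "restart", "crio"); err != nil {
		log.Fatal(err)
	}
}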
	I0319 20:35:57.996033   60008 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:35:57.996118   60008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:35:58.001875   60008 start.go:562] Will wait 60s for crictl version
	I0319 20:35:58.001947   60008 ssh_runner.go:195] Run: which crictl
	I0319 20:35:58.006570   60008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:35:58.060545   60008 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:35:58.060628   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.104858   60008 ssh_runner.go:195] Run: crio --version
	I0319 20:35:58.148992   60008 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0319 20:35:58.150343   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetIP
	I0319 20:35:58.153222   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153634   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:35:58.153663   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:35:58.153924   60008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0319 20:35:58.158830   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:35:58.174622   60008 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 20:35:58.174760   60008 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 20:35:58.174819   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:35:58.220802   60008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0319 20:35:58.220879   60008 ssh_runner.go:195] Run: which lz4
	I0319 20:35:58.225914   60008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0319 20:35:58.230673   60008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0319 20:35:58.230702   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0319 20:35:59.959612   60008 crio.go:462] duration metric: took 1.733738299s to copy over tarball
	I0319 20:35:59.959694   60008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
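Because the crictl image listing did not contain registry.k8s.io/kube-apiserver:v1.29.3, the preloaded image tarball is copied into the guest and unpacked under /var. A rough local sketch of the extract step is below; it assumes the tarball has already been copied to /preloaded.tar.lz4 and that lz4 is installed, as the `which lz4` check above verifies.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same precondition the runner checks: bail out if the tarball never arrived.
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		log.Fatalf("preload tarball missing: %v", err)
	}
	// Mirror of the tar invocation in the log: preserve security xattrs,
	// decompress with lz4, and unpack images plus metadata under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}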
	I0319 20:35:56.677479   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.177779   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.677433   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.177286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:58.677259   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.178033   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:59.677592   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.177360   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:00.677584   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.177318   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:35:57.782684   59019 main.go:141] libmachine: (no-preload-414130) Waiting to get IP...
	I0319 20:35:57.783613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:57.784088   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:57.784180   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:57.784077   60806 retry.go:31] will retry after 304.011729ms: waiting for machine to come up
	I0319 20:35:58.089864   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.090398   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.090431   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.090325   60806 retry.go:31] will retry after 268.702281ms: waiting for machine to come up
	I0319 20:35:58.360743   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.361173   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.361201   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.361116   60806 retry.go:31] will retry after 373.34372ms: waiting for machine to come up
	I0319 20:35:58.735810   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:58.736490   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:58.736518   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:58.736439   60806 retry.go:31] will retry after 588.9164ms: waiting for machine to come up
	I0319 20:35:59.327363   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.327908   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.327938   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.327881   60806 retry.go:31] will retry after 623.38165ms: waiting for machine to come up
	I0319 20:35:59.952641   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:35:59.953108   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:35:59.953138   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:35:59.953090   60806 retry.go:31] will retry after 896.417339ms: waiting for machine to come up
	I0319 20:36:00.851032   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:00.851485   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:00.851514   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:00.851435   60806 retry.go:31] will retry after 869.189134ms: waiting for machine to come up
	I0319 20:35:58.168341   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:00.664629   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:02.594104   60008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634373226s)
	I0319 20:36:02.594140   60008 crio.go:469] duration metric: took 2.634502157s to extract the tarball
	I0319 20:36:02.594149   60008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0319 20:36:02.635454   60008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:02.692442   60008 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 20:36:02.692468   60008 cache_images.go:84] Images are preloaded, skipping loading
	I0319 20:36:02.692477   60008 kubeadm.go:928] updating node { 192.168.39.77 8444 v1.29.3 crio true true} ...
	I0319 20:36:02.692613   60008 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-385240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:02.692697   60008 ssh_runner.go:195] Run: crio config
	I0319 20:36:02.749775   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:02.749798   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:02.749809   60008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:02.749828   60008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-385240 NodeName:default-k8s-diff-port-385240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:02.749967   60008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-385240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 20:36:02.750034   60008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0319 20:36:02.760788   60008 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:02.760843   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:02.770999   60008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0319 20:36:02.789881   60008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 20:36:02.809005   60008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0319 20:36:02.831122   60008 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:02.835609   60008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:02.850186   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:02.990032   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
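The kubelet unit and kubeadm YAML written above are rendered from templates with per-profile values (Kubernetes version, node name, node IP). The sketch below renders the ExecStart line shown earlier with text/template; the struct and field names are hypothetical, not minikube's actual template variables, while the substituted values come straight from the log.

package main

import (
	"os"
	"text/template"
)

// kubeletParams is a hypothetical holder for the per-profile values.
type kubeletParams struct {
	Version  string
	NodeName string
	NodeIP   string
}

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	p := kubeletParams{
		Version:  "v1.29.3",
		NodeName: "default-k8s-diff-port-385240",
		NodeIP:   "192.168.39.77",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}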
	I0319 20:36:03.013831   60008 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240 for IP: 192.168.39.77
	I0319 20:36:03.013858   60008 certs.go:194] generating shared ca certs ...
	I0319 20:36:03.013879   60008 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:03.014072   60008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:03.014125   60008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:03.014137   60008 certs.go:256] generating profile certs ...
	I0319 20:36:03.014256   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.key
	I0319 20:36:03.014325   60008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key.5c19d013
	I0319 20:36:03.014389   60008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key
	I0319 20:36:03.014549   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:03.014602   60008 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:03.014626   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:03.014658   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:03.014691   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:03.014728   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:03.014793   60008 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:03.015673   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:03.070837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:03.115103   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:03.150575   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:03.210934   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0319 20:36:03.254812   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 20:36:03.286463   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:03.315596   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:03.347348   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:03.375837   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:03.407035   60008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:03.439726   60008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:03.461675   60008 ssh_runner.go:195] Run: openssl version
	I0319 20:36:03.468238   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:03.482384   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487682   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.487739   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:03.494591   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:03.509455   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:03.522545   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527556   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.527617   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:03.533925   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:03.546851   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:03.559553   60008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564547   60008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.564595   60008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:03.570824   60008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:03.584339   60008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:03.589542   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:03.595870   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:03.602530   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:03.609086   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:03.615621   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:03.622477   60008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
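Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509 is sketched below; the path is one of the certs from the log and is used only as an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same condition as `openssl x509 -checkend 86400`.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
	} else {
		fmt.Println("certificate is still valid beyond 86400 seconds")
	}
}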
	I0319 20:36:03.629097   60008 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-385240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-385240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:03.629186   60008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:03.629234   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.674484   60008 cri.go:89] found id: ""
	I0319 20:36:03.674568   60008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:03.686995   60008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:03.687020   60008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:03.687026   60008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:03.687094   60008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:03.702228   60008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:03.703334   60008 kubeconfig.go:125] found "default-k8s-diff-port-385240" server: "https://192.168.39.77:8444"
	I0319 20:36:03.705508   60008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:03.719948   60008 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.77
	I0319 20:36:03.719985   60008 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:03.719997   60008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:03.720073   60008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:03.761557   60008 cri.go:89] found id: ""
	I0319 20:36:03.761619   60008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:03.781849   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:03.793569   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:03.793601   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:03.793652   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:36:03.804555   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:03.804605   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:03.816728   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:36:03.828247   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:03.828318   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:03.840814   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.853100   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:03.853168   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:03.867348   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:36:03.879879   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:03.879944   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:03.893810   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:03.906056   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:04.038911   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.173514   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.134566983s)
	I0319 20:36:05.173547   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.395951   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:05.480821   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:01.678211   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.178205   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:02.677366   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.177299   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:03.678132   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.177311   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:04.677210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.177461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:05.677369   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.177363   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:01.721671   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:01.722186   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:01.722212   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:01.722142   60806 retry.go:31] will retry after 997.299446ms: waiting for machine to come up
	I0319 20:36:02.720561   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:02.721007   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:02.721037   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:02.720958   60806 retry.go:31] will retry after 1.64420318s: waiting for machine to come up
	I0319 20:36:04.367668   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:04.368140   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:04.368179   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:04.368083   60806 retry.go:31] will retry after 1.972606192s: waiting for machine to come up
	I0319 20:36:06.342643   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:06.343192   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:06.343236   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:06.343136   60806 retry.go:31] will retry after 2.056060208s: waiting for machine to come up
	I0319 20:36:03.164447   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.665089   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:05.581797   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:05.581879   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.082565   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.582872   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:06.628756   60008 api_server.go:72] duration metric: took 1.046965637s to wait for apiserver process to appear ...
	I0319 20:36:06.628786   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:06.628808   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:06.629340   60008 api_server.go:269] stopped: https://192.168.39.77:8444/healthz: Get "https://192.168.39.77:8444/healthz": dial tcp 192.168.39.77:8444: connect: connection refused
	I0319 20:36:07.128890   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.231991   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.232024   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.232039   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.280784   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:09.280820   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:09.629356   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:09.660326   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:09.660434   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.128936   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.139305   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0319 20:36:10.139336   60008 api_server.go:103] status: https://192.168.39.77:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0319 20:36:10.629187   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:36:10.635922   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:36:10.654111   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:36:10.654137   60008 api_server.go:131] duration metric: took 4.025345365s to wait for apiserver health ...
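The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks; the runner simply re-polls /healthz until it returns 200 with body "ok". Below is a minimal unauthenticated polling sketch against the same endpoint; note that minikube's own check may attach client credentials, and the anonymous 403s in the log show why a bare request is not always sufficient.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The apiserver cert is not in the local trust store, so skip verification
		// for this probe (test environment only).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.77:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}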
	I0319 20:36:10.654146   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:36:10.654154   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:10.656104   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:36:06.677487   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.177385   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:07.677461   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.177486   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.677978   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.177279   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:09.677265   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.177569   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:10.677831   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:11.178040   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:08.401478   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:08.402086   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:08.402111   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:08.402001   60806 retry.go:31] will retry after 2.487532232s: waiting for machine to come up
	I0319 20:36:10.891005   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:10.891550   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:10.891591   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:10.891503   60806 retry.go:31] will retry after 3.741447035s: waiting for machine to come up
	I0319 20:36:08.163468   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.165537   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:12.661667   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:10.657654   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:10.672795   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:10.715527   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:10.728811   60008 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:10.728850   60008 system_pods.go:61] "coredns-76f75df574-hsdk2" [319e5411-97e4-4021-80d0-b39195acb696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:10.728862   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [d10870b0-a0e1-47aa-baf9-07065c1d9142] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:10.728873   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [4925af1b-328f-42ee-b2ef-78b58fcbdd0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:10.728883   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [6dad1c39-3fbc-4364-9ed8-725c0f518191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:10.728889   60008 system_pods.go:61] "kube-proxy-bwj22" [9cc86566-612e-48bc-94c9-a2dad6978c92] Running
	I0319 20:36:10.728896   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [e9c38443-ea8c-4590-94ca-61077f850b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:10.728904   60008 system_pods.go:61] "metrics-server-57f55c9bc5-ddl2q" [ecb174e4-18b0-459e-afb1-137a1f6bdd67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:10.728919   60008 system_pods.go:61] "storage-provisioner" [95fb27b5-769c-4420-8021-3d97942c9f42] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0319 20:36:10.728931   60008 system_pods.go:74] duration metric: took 13.321799ms to wait for pod list to return data ...
	I0319 20:36:10.728944   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:10.743270   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:10.743312   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:10.743326   60008 node_conditions.go:105] duration metric: took 14.37332ms to run NodePressure ...
	I0319 20:36:10.743348   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:11.028786   60008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034096   60008 kubeadm.go:733] kubelet initialised
	I0319 20:36:11.034115   60008 kubeadm.go:734] duration metric: took 5.302543ms waiting for restarted kubelet to initialise ...
	I0319 20:36:11.034122   60008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:11.040118   60008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.046021   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046048   60008 pod_ready.go:81] duration metric: took 5.906752ms for pod "coredns-76f75df574-hsdk2" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.046060   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "coredns-76f75df574-hsdk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.046069   60008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.051677   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051700   60008 pod_ready.go:81] duration metric: took 5.61463ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.051712   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.051721   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:11.057867   60008 pod_ready.go:97] node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057893   60008 pod_ready.go:81] duration metric: took 6.163114ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	E0319 20:36:11.057905   60008 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-385240" hosting pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-385240" has status "Ready":"False"
	I0319 20:36:11.057912   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:13.065761   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
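Editor's note: the pod_ready retries above poll each system pod's Ready condition on a fixed budget (4m0s here) and log "Ready":"False" until the condition flips. A minimal, illustrative client-go sketch of that kind of poll; the namespace, pod name, interval, and timeout below are taken from or assumed for this log, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Overall budget mirrors the "waiting up to 4m0s" lines above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-bwj22", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}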
	I0319 20:36:11.677380   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.178210   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:12.677503   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.177440   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:13.677844   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.178106   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.678026   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.178031   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:15.677522   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:16.177455   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:14.634526   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:14.635125   59019 main.go:141] libmachine: (no-preload-414130) DBG | unable to find current IP address of domain no-preload-414130 in network mk-no-preload-414130
	I0319 20:36:14.635155   59019 main.go:141] libmachine: (no-preload-414130) DBG | I0319 20:36:14.635074   60806 retry.go:31] will retry after 3.841866145s: waiting for machine to come up
	I0319 20:36:14.662669   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.664913   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:15.565340   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:17.567623   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:19.570775   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:16.678137   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.177404   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:17.677511   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.177471   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.677441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.177994   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:19.677451   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.177534   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:20.677308   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.177510   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:18.479276   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.479810   59019 main.go:141] libmachine: (no-preload-414130) Found IP for machine: 192.168.72.29
	I0319 20:36:18.479836   59019 main.go:141] libmachine: (no-preload-414130) Reserving static IP address...
	I0319 20:36:18.479852   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has current primary IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.480232   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.480279   59019 main.go:141] libmachine: (no-preload-414130) DBG | skip adding static IP to network mk-no-preload-414130 - found existing host DHCP lease matching {name: "no-preload-414130", mac: "52:54:00:f0:f0:55", ip: "192.168.72.29"}
	I0319 20:36:18.480297   59019 main.go:141] libmachine: (no-preload-414130) Reserved static IP address: 192.168.72.29
	I0319 20:36:18.480319   59019 main.go:141] libmachine: (no-preload-414130) Waiting for SSH to be available...
	I0319 20:36:18.480336   59019 main.go:141] libmachine: (no-preload-414130) DBG | Getting to WaitForSSH function...
	I0319 20:36:18.482725   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483025   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.483052   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.483228   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH client type: external
	I0319 20:36:18.483262   59019 main.go:141] libmachine: (no-preload-414130) DBG | Using SSH private key: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa (-rw-------)
	I0319 20:36:18.483299   59019 main.go:141] libmachine: (no-preload-414130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0319 20:36:18.483320   59019 main.go:141] libmachine: (no-preload-414130) DBG | About to run SSH command:
	I0319 20:36:18.483373   59019 main.go:141] libmachine: (no-preload-414130) DBG | exit 0
	I0319 20:36:18.612349   59019 main.go:141] libmachine: (no-preload-414130) DBG | SSH cmd err, output: <nil>: 
	I0319 20:36:18.612766   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetConfigRaw
	I0319 20:36:18.613495   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.616106   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616459   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.616498   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.616729   59019 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/config.json ...
	I0319 20:36:18.616940   59019 machine.go:94] provisionDockerMachine start ...
	I0319 20:36:18.616957   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:18.617150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.619316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619599   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.619620   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.619750   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.619895   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620054   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.620166   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.620339   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.620508   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.620521   59019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 20:36:18.729177   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0319 20:36:18.729203   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729483   59019 buildroot.go:166] provisioning hostname "no-preload-414130"
	I0319 20:36:18.729511   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.729728   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.732330   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732633   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.732664   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.732746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.732944   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733087   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.733211   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.733347   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.733513   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.733528   59019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-414130 && echo "no-preload-414130" | sudo tee /etc/hostname
	I0319 20:36:18.857142   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-414130
	
	I0319 20:36:18.857178   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.860040   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860434   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.860465   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.860682   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:18.860907   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861102   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:18.861283   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:18.861462   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:18.861661   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:18.861685   59019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-414130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-414130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-414130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 20:36:18.976726   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 20:36:18.976755   59019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18453-10028/.minikube CaCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18453-10028/.minikube}
	I0319 20:36:18.976776   59019 buildroot.go:174] setting up certificates
	I0319 20:36:18.976789   59019 provision.go:84] configureAuth start
	I0319 20:36:18.976803   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetMachineName
	I0319 20:36:18.977095   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:18.980523   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.980948   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.980976   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.981150   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:18.983394   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983720   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:18.983741   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:18.983887   59019 provision.go:143] copyHostCerts
	I0319 20:36:18.983949   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem, removing ...
	I0319 20:36:18.983959   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem
	I0319 20:36:18.984009   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/ca.pem (1082 bytes)
	I0319 20:36:18.984092   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem, removing ...
	I0319 20:36:18.984099   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem
	I0319 20:36:18.984118   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/cert.pem (1123 bytes)
	I0319 20:36:18.984224   59019 exec_runner.go:144] found /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem, removing ...
	I0319 20:36:18.984237   59019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem
	I0319 20:36:18.984284   59019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18453-10028/.minikube/key.pem (1675 bytes)
	I0319 20:36:18.984348   59019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem org=jenkins.no-preload-414130 san=[127.0.0.1 192.168.72.29 localhost minikube no-preload-414130]
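Editor's note: the provision step above generates a server certificate signed by the local CA, with the SANs listed in the log (127.0.0.1, 192.168.72.29, localhost, minikube, no-preload-414130). A hedged sketch of the same idea using Go's crypto/x509; the file names, RSA/PKCS#1 key format, and three-year validity are assumptions, not what minikube necessarily uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA certificate and its RSA (PKCS#1) private key; paths are illustrative.
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	// SANs mirror the ones logged above: loopback, the VM IP, and the host names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-414130"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.29")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-414130"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
}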
	I0319 20:36:19.241365   59019 provision.go:177] copyRemoteCerts
	I0319 20:36:19.241422   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 20:36:19.241445   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.244060   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244362   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.244388   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.244593   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.244781   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.244956   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.245125   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.332749   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 20:36:19.360026   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0319 20:36:19.386680   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 20:36:19.414673   59019 provision.go:87] duration metric: took 437.87318ms to configureAuth
	I0319 20:36:19.414697   59019 buildroot.go:189] setting minikube options for container-runtime
	I0319 20:36:19.414893   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:36:19.414964   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.417627   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.417949   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.417974   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.418139   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.418351   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418513   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.418687   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.418854   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.419099   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.419120   59019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 20:36:19.712503   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 20:36:19.712538   59019 machine.go:97] duration metric: took 1.095583423s to provisionDockerMachine
	I0319 20:36:19.712554   59019 start.go:293] postStartSetup for "no-preload-414130" (driver="kvm2")
	I0319 20:36:19.712573   59019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 20:36:19.712595   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.712918   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 20:36:19.712953   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.715455   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715779   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.715813   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.715917   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.716098   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.716307   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.716455   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.801402   59019 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 20:36:19.806156   59019 info.go:137] Remote host: Buildroot 2023.02.9
	I0319 20:36:19.806181   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/addons for local assets ...
	I0319 20:36:19.806253   59019 filesync.go:126] Scanning /home/jenkins/minikube-integration/18453-10028/.minikube/files for local assets ...
	I0319 20:36:19.806330   59019 filesync.go:149] local asset: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem -> 173012.pem in /etc/ssl/certs
	I0319 20:36:19.806451   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 20:36:19.818601   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:19.845698   59019 start.go:296] duration metric: took 133.131789ms for postStartSetup
	I0319 20:36:19.845728   59019 fix.go:56] duration metric: took 23.395944884s for fixHost
	I0319 20:36:19.845746   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.848343   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848727   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.848760   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.848909   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.849090   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849256   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.849452   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.849667   59019 main.go:141] libmachine: Using SSH client type: native
	I0319 20:36:19.849843   59019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.29 22 <nil> <nil>}
	I0319 20:36:19.849853   59019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0319 20:36:19.957555   59019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710880579.901731357
	
	I0319 20:36:19.957574   59019 fix.go:216] guest clock: 1710880579.901731357
	I0319 20:36:19.957581   59019 fix.go:229] Guest: 2024-03-19 20:36:19.901731357 +0000 UTC Remote: 2024-03-19 20:36:19.845732308 +0000 UTC m=+363.236094224 (delta=55.999049ms)
	I0319 20:36:19.957612   59019 fix.go:200] guest clock delta is within tolerance: 55.999049ms
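Editor's note: the fix.go lines above compare the guest VM's clock against the host's and accept it only when the offset is within tolerance (here a 55.999049ms delta passes). A tiny illustrative sketch of that comparison; the one-second tolerance is an assumption for the example:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute host/guest clock offset is acceptable.
func withinTolerance(host, guest time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above (Remote = host, Guest = VM).
	host := time.Date(2024, 3, 19, 20, 36, 19, 845732308, time.UTC)
	guest := time.Date(2024, 3, 19, 20, 36, 19, 901731357, time.UTC)
	fmt.Println(withinTolerance(host, guest, time.Second)) // true: ~56ms delta
}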
	I0319 20:36:19.957625   59019 start.go:83] releasing machines lock for "no-preload-414130", held for 23.507874645s
	I0319 20:36:19.957656   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.957889   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:19.960613   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.960930   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.960957   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.961108   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961627   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961804   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:36:19.961883   59019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 20:36:19.961930   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.961996   59019 ssh_runner.go:195] Run: cat /version.json
	I0319 20:36:19.962022   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:36:19.964593   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.964790   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965034   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965057   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965250   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965368   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:19.965397   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:19.965416   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965529   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:36:19.965611   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965677   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:36:19.965764   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:19.965788   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:36:19.965893   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:36:20.041410   59019 ssh_runner.go:195] Run: systemctl --version
	I0319 20:36:20.067540   59019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 20:36:20.214890   59019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0319 20:36:20.222680   59019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0319 20:36:20.222735   59019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 20:36:20.239981   59019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0319 20:36:20.240003   59019 start.go:494] detecting cgroup driver to use...
	I0319 20:36:20.240066   59019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 20:36:20.260435   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 20:36:20.277338   59019 docker.go:217] disabling cri-docker service (if available) ...
	I0319 20:36:20.277398   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 20:36:20.294069   59019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 20:36:20.309777   59019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 20:36:20.443260   59019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 20:36:20.595476   59019 docker.go:233] disabling docker service ...
	I0319 20:36:20.595552   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 20:36:20.612622   59019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 20:36:20.627717   59019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 20:36:20.790423   59019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 20:36:20.915434   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 20:36:20.932043   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 20:36:20.953955   59019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0319 20:36:20.954026   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.966160   59019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 20:36:20.966230   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.978217   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:20.990380   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.002669   59019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 20:36:21.014880   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.026125   59019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.045239   59019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 20:36:21.056611   59019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 20:36:21.067763   59019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0319 20:36:21.067818   59019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0319 20:36:21.084054   59019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 20:36:21.095014   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:21.237360   59019 ssh_runner.go:195] Run: sudo systemctl restart crio
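Editor's note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, default sysctls), loads br_netfilter, enables IP forwarding, and then restarts CRI-O. A hedged Go sketch of one such in-place substitution, equivalent to the cgroup_manager sed shown earlier; run on a node, not on the CI host:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}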
	I0319 20:36:21.396979   59019 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 20:36:21.397047   59019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 20:36:21.402456   59019 start.go:562] Will wait 60s for crictl version
	I0319 20:36:21.402509   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.406963   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 20:36:21.446255   59019 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0319 20:36:21.446351   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.477273   59019 ssh_runner.go:195] Run: crio --version
	I0319 20:36:21.519196   59019 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0319 20:36:21.520520   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetIP
	I0319 20:36:21.523401   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.523792   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:36:21.523822   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:36:21.524033   59019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0319 20:36:21.528973   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:21.543033   59019 kubeadm.go:877] updating cluster {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0319 20:36:21.543154   59019 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 20:36:21.543185   59019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 20:36:21.583439   59019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0319 20:36:21.583472   59019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0319 20:36:21.583515   59019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.583551   59019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.583566   59019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.583610   59019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.583622   59019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.583646   59019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.583731   59019 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0319 20:36:21.583766   59019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585216   59019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.585225   59019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.585236   59019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.585210   59019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:21.585247   59019 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0319 20:36:21.585253   59019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.585285   59019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.585297   59019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:19.163241   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:21.165282   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:22.071931   60008 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:24.567506   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.567537   60008 pod_ready.go:81] duration metric: took 13.509614974s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.567553   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573414   60008 pod_ready.go:92] pod "kube-proxy-bwj22" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.573444   60008 pod_ready.go:81] duration metric: took 5.881434ms for pod "kube-proxy-bwj22" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.573457   60008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580429   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:24.580452   60008 pod_ready.go:81] duration metric: took 6.984808ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:24.580463   60008 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:21.677495   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.177292   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:22.677547   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.177181   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:23.677303   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.177535   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:24.677378   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.177241   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:25.677497   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:26.177504   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:21.722682   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.727610   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0319 20:36:21.738933   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.740326   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.772871   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.801213   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.829968   59019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0319 20:36:21.830008   59019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.830053   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.832291   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.945513   59019 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0319 20:36:21.945558   59019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:21.945612   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945618   59019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0319 20:36:21.945651   59019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.945663   59019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0319 20:36:21.945687   59019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.945695   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.945721   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970009   59019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0319 20:36:21.970052   59019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:21.970079   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0319 20:36:21.970090   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970100   59019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0319 20:36:21.970125   59019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:21.970149   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:21.970177   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0319 20:36:21.970167   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0319 20:36:22.062153   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0319 20:36:22.062260   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.063754   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.063840   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:22.091003   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091052   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0319 20:36:22.091104   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:22.091335   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0319 20:36:22.091372   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0319 20:36:22.091382   59019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091405   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0319 20:36:22.091423   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0319 20:36:22.091426   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0319 20:36:22.091475   59019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:22.096817   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0319 20:36:22.155139   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.155289   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:22.190022   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0319 20:36:22.190072   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.190166   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:22.507872   59019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445006   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.353551966s)
	I0319 20:36:26.445031   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0319 20:36:26.445049   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445063   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (4.289744726s)
	I0319 20:36:26.445095   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445099   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0319 20:36:26.445107   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (4.254920134s)
	I0319 20:36:26.445135   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0319 20:36:26.445176   59019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.937263856s)
	I0319 20:36:26.445228   59019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0319 20:36:26.445254   59019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:26.445296   59019 ssh_runner.go:195] Run: which crictl
	I0319 20:36:23.665322   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.167485   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.588550   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:29.088665   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:26.677333   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.177269   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:27.677273   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.178202   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.678263   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.177346   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:29.677823   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.178013   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:30.677371   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:31.177646   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:28.407117   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (1.96198659s)
	I0319 20:36:28.407156   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0319 20:36:28.407176   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407171   59019 ssh_runner.go:235] Completed: which crictl: (1.961850083s)
	I0319 20:36:28.407212   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0319 20:36:28.407244   59019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:36:30.495567   59019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088296063s)
	I0319 20:36:30.495590   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (2.088358118s)
	I0319 20:36:30.495606   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0319 20:36:30.495617   59019 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0319 20:36:30.495633   59019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495686   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0319 20:36:30.495735   59019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:28.662588   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.163637   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.589581   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:34.090180   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:31.678134   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.178176   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.678118   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.177276   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:33.678018   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.177508   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:34.677186   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.177445   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:35.678113   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:36.177458   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:32.473194   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977482574s)
	I0319 20:36:32.473238   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0319 20:36:32.473263   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:32.473260   59019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977498716s)
	I0319 20:36:32.473294   59019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0319 20:36:32.473311   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0319 20:36:34.927774   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.454440131s)
	I0319 20:36:34.927813   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0319 20:36:34.927842   59019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:34.927888   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0319 20:36:33.664608   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.163358   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.588459   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:38.590173   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:36.677686   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.177197   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.677489   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.178173   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.678089   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.177514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:39.677923   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.177301   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:40.677431   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.178143   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:37.512011   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.584091271s)
	I0319 20:36:37.512048   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0319 20:36:37.512077   59019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:37.512134   59019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0319 20:36:38.589202   59019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.077040733s)
	I0319 20:36:38.589231   59019 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18453-10028/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0319 20:36:38.589263   59019 cache_images.go:123] Successfully loaded all cached images
	I0319 20:36:38.589278   59019 cache_images.go:92] duration metric: took 17.005785801s to LoadCachedImages
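
	The ssh_runner lines above show how each cached image tarball under /var/lib/minikube/images is pushed into the node's container storage with `sudo podman load -i <tarball>` before CRI-O can use it. Below is a minimal, illustrative Go sketch of that single step (not minikube's actual implementation; the tarball path is copied from the log purely for illustration):

	// Illustrative sketch: load one cached image tarball with `podman load`,
	// the same command the ssh_runner lines above record.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Path taken from the log for illustration only.
		out, err := exec.Command("sudo", "podman", "load", "-i",
			"/var/lib/minikube/images/etcd_3.5.12-0").CombinedOutput()
		if err != nil {
			log.Fatalf("podman load failed: %v\n%s", err, out)
		}
		log.Printf("podman load output:\n%s", out)
	}
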
	I0319 20:36:38.589291   59019 kubeadm.go:928] updating node { 192.168.72.29 8443 v1.30.0-beta.0 crio true true} ...
	I0319 20:36:38.589415   59019 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-414130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 20:36:38.589495   59019 ssh_runner.go:195] Run: crio config
	I0319 20:36:38.648312   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:38.648334   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:38.648346   59019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 20:36:38.648366   59019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.29 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-414130 NodeName:no-preload-414130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 20:36:38.648494   59019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-414130"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
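	The block above is the kubeadm.yaml that minikube renders and, a few lines later, copies to /var/tmp/minikube/kubeadm.yaml.new on the node. One quick sanity check is to split the multi-document YAML and confirm each document's apiVersion/kind. A minimal sketch, assuming the file has been copied locally as kubeadm.yaml and using gopkg.in/yaml.v3 (illustrative only, not part of minikube):

	// Illustrative check: list apiVersion/kind of every document in a
	// multi-document kubeadm.yaml.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Local copy of the generated config; the path is an assumption.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}
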
	I0319 20:36:38.648554   59019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0319 20:36:38.665850   59019 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 20:36:38.665928   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 20:36:38.678211   59019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0319 20:36:38.701657   59019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0319 20:36:38.721498   59019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0319 20:36:38.741159   59019 ssh_runner.go:195] Run: grep 192.168.72.29	control-plane.minikube.internal$ /etc/hosts
	I0319 20:36:38.745617   59019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 20:36:38.759668   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:36:38.896211   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:36:38.916698   59019 certs.go:68] Setting up /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130 for IP: 192.168.72.29
	I0319 20:36:38.916720   59019 certs.go:194] generating shared ca certs ...
	I0319 20:36:38.916748   59019 certs.go:226] acquiring lock for ca certs: {Name:mk753cf639d289f9668854fb4e0e29364ad184fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:36:38.916888   59019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key
	I0319 20:36:38.916930   59019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key
	I0319 20:36:38.916943   59019 certs.go:256] generating profile certs ...
	I0319 20:36:38.917055   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.key
	I0319 20:36:38.917134   59019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key.2d7d554c
	I0319 20:36:38.917185   59019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key
	I0319 20:36:38.917324   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem (1338 bytes)
	W0319 20:36:38.917381   59019 certs.go:480] ignoring /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301_empty.pem, impossibly tiny 0 bytes
	I0319 20:36:38.917396   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca-key.pem (1679 bytes)
	I0319 20:36:38.917434   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/ca.pem (1082 bytes)
	I0319 20:36:38.917469   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/cert.pem (1123 bytes)
	I0319 20:36:38.917501   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/certs/key.pem (1675 bytes)
	I0319 20:36:38.917553   59019 certs.go:484] found cert: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem (1708 bytes)
	I0319 20:36:38.918130   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 20:36:38.959630   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 20:36:39.007656   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 20:36:39.046666   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 20:36:39.078901   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 20:36:39.116600   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0319 20:36:39.158517   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 20:36:39.188494   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0319 20:36:39.218770   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/certs/17301.pem --> /usr/share/ca-certificates/17301.pem (1338 bytes)
	I0319 20:36:39.247341   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/ssl/certs/173012.pem --> /usr/share/ca-certificates/173012.pem (1708 bytes)
	I0319 20:36:39.275816   59019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18453-10028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 20:36:39.303434   59019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 20:36:39.326445   59019 ssh_runner.go:195] Run: openssl version
	I0319 20:36:39.333373   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17301.pem && ln -fs /usr/share/ca-certificates/17301.pem /etc/ssl/certs/17301.pem"
	I0319 20:36:39.346280   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352619   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 19:17 /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.352686   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17301.pem
	I0319 20:36:39.359796   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17301.pem /etc/ssl/certs/51391683.0"
	I0319 20:36:39.372480   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/173012.pem && ln -fs /usr/share/ca-certificates/173012.pem /etc/ssl/certs/173012.pem"
	I0319 20:36:39.384231   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389760   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 19:17 /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.389818   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/173012.pem
	I0319 20:36:39.396639   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/173012.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 20:36:39.408887   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 20:36:39.421847   59019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427779   59019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 19:07 /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.427848   59019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 20:36:39.434447   59019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 20:36:39.446945   59019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 20:36:39.452219   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 20:36:39.458729   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 20:36:39.465298   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 20:36:39.471931   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 20:36:39.478810   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 20:36:39.485551   59019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
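	The six openssl runs above are `openssl x509 -noout -in <cert> -checkend 86400`, i.e. a check that each control-plane certificate remains valid for at least the next 24 hours. A minimal Go equivalent of that check, assuming a locally readable PEM file named apiserver.crt (illustrative only, not minikube code):

	// Illustrative equivalent of `openssl x509 -checkend 86400`:
	// exit non-zero if the certificate expires within 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Certificate path is an assumption for illustration.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 86400s")
	}
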
	I0319 20:36:39.492084   59019 kubeadm.go:391] StartCluster: {Name:no-preload-414130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-414130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 20:36:39.492210   59019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 20:36:39.492297   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.535094   59019 cri.go:89] found id: ""
	I0319 20:36:39.535157   59019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0319 20:36:39.549099   59019 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0319 20:36:39.549123   59019 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0319 20:36:39.549129   59019 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0319 20:36:39.549179   59019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 20:36:39.560565   59019 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:36:39.561570   59019 kubeconfig.go:125] found "no-preload-414130" server: "https://192.168.72.29:8443"
	I0319 20:36:39.563750   59019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 20:36:39.578708   59019 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.29
	I0319 20:36:39.578746   59019 kubeadm.go:1154] stopping kube-system containers ...
	I0319 20:36:39.578756   59019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0319 20:36:39.578799   59019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 20:36:39.620091   59019 cri.go:89] found id: ""
	I0319 20:36:39.620152   59019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0319 20:36:39.639542   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:36:39.652115   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:36:39.652133   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:36:39.652190   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:36:39.664047   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:36:39.664114   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:36:39.675218   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:36:39.685482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:36:39.685533   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:36:39.695803   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.705482   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:36:39.705538   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:36:39.715747   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:36:39.725260   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:36:39.725324   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:36:39.735246   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:36:39.745069   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:39.862945   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.548185   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.794369   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.891458   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:40.992790   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:36:40.992871   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.493489   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:38.164706   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:40.662753   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:42.663084   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.087924   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:43.087987   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:41.677679   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.178286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.677224   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.177325   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:43.677337   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.178056   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:44.678145   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.177295   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:45.677321   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:46.178002   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:41.993208   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:42.040237   59019 api_server.go:72] duration metric: took 1.047447953s to wait for apiserver process to appear ...
	I0319 20:36:42.040278   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:36:42.040323   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:42.040927   59019 api_server.go:269] stopped: https://192.168.72.29:8443/healthz: Get "https://192.168.72.29:8443/healthz": dial tcp 192.168.72.29:8443: connect: connection refused
	I0319 20:36:42.541457   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.853765   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 20:36:44.853796   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 20:36:44.853834   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:44.967607   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:44.967648   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.040791   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.049359   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.049400   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:45.541024   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:45.545880   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:45.545907   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.041423   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.046075   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.046101   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:46.541147   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:46.546547   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:46.546587   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:44.664041   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.163545   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.040899   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.046413   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.046453   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:47.541051   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:47.547309   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:47.547334   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.040856   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.046293   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 20:36:48.046318   59019 api_server.go:103] status: https://192.168.72.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 20:36:48.540858   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:36:48.545311   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
	I0319 20:36:48.551941   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:36:48.551962   59019 api_server.go:131] duration metric: took 6.511678507s to wait for apiserver health ...
	I0319 20:36:48.551970   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:36:48.551976   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:36:48.553824   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
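(The api_server.go lines above show minikube polling https://192.168.72.29:8443/healthz roughly every half second, logging each 500 response body until the endpoint returns 200. A minimal sketch of that kind of wait loop follows; it is not minikube's implementation, and the URL, poll interval, timeout, and the skipped certificate verification are illustrative assumptions.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires, printing each non-200 body much like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	// Sketch assumption: skip cert verification, since the apiserver is still
	// coming up; a real client would trust the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
			// A 500 whose body lists "[-]poststarthook/... failed" is what the log shows.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not become ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.29:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}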
	I0319 20:36:45.588011   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:47.589644   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:50.088130   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:46.677759   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:47.177806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:47.177891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:47.224063   59621 cri.go:89] found id: ""
	I0319 20:36:47.224096   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.224107   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:47.224114   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:47.224172   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:47.262717   59621 cri.go:89] found id: ""
	I0319 20:36:47.262748   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.262759   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:47.262765   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:47.262822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:47.305864   59621 cri.go:89] found id: ""
	I0319 20:36:47.305890   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.305898   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:47.305905   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:47.305975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:47.349183   59621 cri.go:89] found id: ""
	I0319 20:36:47.349215   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.349226   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:47.349251   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:47.349324   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:47.385684   59621 cri.go:89] found id: ""
	I0319 20:36:47.385714   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.385724   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:47.385731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:47.385782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:47.422640   59621 cri.go:89] found id: ""
	I0319 20:36:47.422663   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.422671   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:47.422676   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:47.422721   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:47.463766   59621 cri.go:89] found id: ""
	I0319 20:36:47.463789   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.463796   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:47.463811   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:47.463868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:47.505373   59621 cri.go:89] found id: ""
	I0319 20:36:47.505399   59621 logs.go:276] 0 containers: []
	W0319 20:36:47.505409   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:47.505419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:47.505433   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:47.559271   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:47.559298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:47.577232   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:47.577268   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:47.732181   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:47.732215   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:47.732230   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:47.801950   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:47.801987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
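(The repeated cri.go / logs.go cycle above is a presence probe: minikube runs `sudo crictl ps -a --quiet --name=<component>` over ssh_runner, treats empty output as "No container was found matching ...", and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A rough local sketch of that presence check, running crictl directly instead of over SSH purely for illustration:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same crictl query the log shows and returns any IDs found.
// An empty result corresponds to the `No container was found matching "<name>"` warnings above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}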
	I0319 20:36:50.353889   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:50.367989   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:50.368060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:50.406811   59621 cri.go:89] found id: ""
	I0319 20:36:50.406839   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.406850   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:50.406857   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:50.406902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:50.452196   59621 cri.go:89] found id: ""
	I0319 20:36:50.452220   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.452231   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:50.452238   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:50.452310   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:50.490806   59621 cri.go:89] found id: ""
	I0319 20:36:50.490830   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.490838   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:50.490844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:50.490896   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:50.530417   59621 cri.go:89] found id: ""
	I0319 20:36:50.530442   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.530479   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:50.530486   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:50.530540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:50.570768   59621 cri.go:89] found id: ""
	I0319 20:36:50.570793   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.570803   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:50.570810   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:50.570866   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:50.610713   59621 cri.go:89] found id: ""
	I0319 20:36:50.610737   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.610746   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:50.610752   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:50.610806   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:50.651684   59621 cri.go:89] found id: ""
	I0319 20:36:50.651713   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.651724   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:50.651731   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:50.651787   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:50.695423   59621 cri.go:89] found id: ""
	I0319 20:36:50.695452   59621 logs.go:276] 0 containers: []
	W0319 20:36:50.695461   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:50.695471   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:50.695487   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:50.752534   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:50.752569   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:50.767418   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:50.767441   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:50.855670   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:50.855691   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:50.855703   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:50.926912   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:50.926943   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:48.555094   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:36:48.566904   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:36:48.592246   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:36:48.603249   59019 system_pods.go:59] 8 kube-system pods found
	I0319 20:36:48.603277   59019 system_pods.go:61] "coredns-7db6d8ff4d-t42ph" [bc831304-6e17-452d-8059-22bb46bad525] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0319 20:36:48.603284   59019 system_pods.go:61] "etcd-no-preload-414130" [e2ac0f77-fade-4ac6-a472-58df4040a57d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 20:36:48.603294   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [1128c23f-0cc6-4cd4-aeed-32f3d4570e2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 20:36:48.603300   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [b03747b6-c3ed-44cf-bcc8-dc2cea408100] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 20:36:48.603304   59019 system_pods.go:61] "kube-proxy-dttkh" [23ac1cd6-588b-4745-9c0b-740f9f0e684c] Running
	I0319 20:36:48.603313   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [99fde84c-78d6-4c57-8889-c0d9f3b55a9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 20:36:48.603318   59019 system_pods.go:61] "metrics-server-569cc877fc-jvlnl" [318246fd-b809-40fa-8aff-78eb33ea10fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:36:48.603322   59019 system_pods.go:61] "storage-provisioner" [80470118-b092-4ba1-b830-d6f13173434d] Running
	I0319 20:36:48.603327   59019 system_pods.go:74] duration metric: took 11.054488ms to wait for pod list to return data ...
	I0319 20:36:48.603336   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:36:48.606647   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:36:48.606667   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:36:48.606678   59019 node_conditions.go:105] duration metric: took 3.33741ms to run NodePressure ...
	I0319 20:36:48.606693   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0319 20:36:48.888146   59019 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898053   59019 kubeadm.go:733] kubelet initialised
	I0319 20:36:48.898073   59019 kubeadm.go:734] duration metric: took 9.903203ms waiting for restarted kubelet to initialise ...
	I0319 20:36:48.898082   59019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:36:48.911305   59019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:50.918568   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
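(The pod_ready.go lines above and below poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that readiness check follows; the kubeconfig path, poll interval, and timeout are assumptions for illustration, and the pod name is taken from the log.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, mirroring the
// "Ready":"True" / "Ready":"False" statuses printed in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube manages its own under the test profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll a single kube-system pod for up to 4 minutes, as the log's wait does.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7db6d8ff4d-t42ph", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}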
	I0319 20:36:49.664061   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.162467   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:52.588174   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:55.088783   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:53.472442   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:53.488058   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:53.488127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:53.527382   59621 cri.go:89] found id: ""
	I0319 20:36:53.527412   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.527423   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:53.527431   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:53.527512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:53.571162   59621 cri.go:89] found id: ""
	I0319 20:36:53.571186   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.571193   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:53.571198   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:53.571240   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:53.615276   59621 cri.go:89] found id: ""
	I0319 20:36:53.615298   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.615307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:53.615314   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:53.615381   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:53.666517   59621 cri.go:89] found id: ""
	I0319 20:36:53.666590   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.666602   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:53.666610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:53.666685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:53.718237   59621 cri.go:89] found id: ""
	I0319 20:36:53.718263   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.718273   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:53.718280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:53.718336   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:53.763261   59621 cri.go:89] found id: ""
	I0319 20:36:53.763286   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.763296   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:53.763304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:53.763396   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:53.804966   59621 cri.go:89] found id: ""
	I0319 20:36:53.804994   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.805004   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:53.805011   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:53.805078   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:53.846721   59621 cri.go:89] found id: ""
	I0319 20:36:53.846750   59621 logs.go:276] 0 containers: []
	W0319 20:36:53.846761   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:53.846772   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:53.846807   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:53.924743   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:53.924779   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:53.941968   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:53.942004   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:54.037348   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:54.037374   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:54.037392   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:54.123423   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:54.123476   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:52.920852   59019 pod_ready.go:102] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:54.419386   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.419410   59019 pod_ready.go:81] duration metric: took 5.508083852s for pod "coredns-7db6d8ff4d-t42ph" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.419420   59019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926059   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.926081   59019 pod_ready.go:81] duration metric: took 506.65554ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.926090   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930519   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:54.930538   59019 pod_ready.go:81] duration metric: took 4.441479ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.930546   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.436969   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.436991   59019 pod_ready.go:81] duration metric: took 506.439126ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.437002   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443096   59019 pod_ready.go:92] pod "kube-proxy-dttkh" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:55.443120   59019 pod_ready.go:81] duration metric: took 6.110267ms for pod "kube-proxy-dttkh" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:55.443132   59019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465091   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:36:56.465114   59019 pod_ready.go:81] duration metric: took 1.021974956s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:56.465123   59019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	I0319 20:36:54.163556   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.663128   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:57.589188   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.093044   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:56.675072   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:56.692932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:56.692999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:56.741734   59621 cri.go:89] found id: ""
	I0319 20:36:56.741760   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.741770   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:56.741778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:56.741840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:56.790710   59621 cri.go:89] found id: ""
	I0319 20:36:56.790738   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.790748   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:56.790755   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:56.790813   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:36:56.843430   59621 cri.go:89] found id: ""
	I0319 20:36:56.843460   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.843469   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:36:56.843477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:36:56.843536   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:36:56.890421   59621 cri.go:89] found id: ""
	I0319 20:36:56.890446   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.890453   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:36:56.890459   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:36:56.890519   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:36:56.931391   59621 cri.go:89] found id: ""
	I0319 20:36:56.931417   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.931428   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:36:56.931434   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:36:56.931488   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:36:56.972326   59621 cri.go:89] found id: ""
	I0319 20:36:56.972349   59621 logs.go:276] 0 containers: []
	W0319 20:36:56.972356   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:36:56.972367   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:36:56.972421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:36:57.012293   59621 cri.go:89] found id: ""
	I0319 20:36:57.012320   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.012330   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:36:57.012339   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:36:57.012404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:36:57.056236   59621 cri.go:89] found id: ""
	I0319 20:36:57.056274   59621 logs.go:276] 0 containers: []
	W0319 20:36:57.056286   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:36:57.056296   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:36:57.056310   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:36:57.071302   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:36:57.071328   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:36:57.166927   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:36:57.166954   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:36:57.166970   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:36:57.248176   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:36:57.248205   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:57.317299   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:36:57.317323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:36:59.874514   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:36:59.891139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:36:59.891214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:36:59.932278   59621 cri.go:89] found id: ""
	I0319 20:36:59.932310   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.932317   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:36:59.932323   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:36:59.932367   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:36:59.972661   59621 cri.go:89] found id: ""
	I0319 20:36:59.972686   59621 logs.go:276] 0 containers: []
	W0319 20:36:59.972695   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:36:59.972701   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:36:59.972760   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:00.014564   59621 cri.go:89] found id: ""
	I0319 20:37:00.014593   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.014603   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:00.014608   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:00.014656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:00.058917   59621 cri.go:89] found id: ""
	I0319 20:37:00.058946   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.058954   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:00.058959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:00.059015   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:00.104115   59621 cri.go:89] found id: ""
	I0319 20:37:00.104141   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.104150   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:00.104155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:00.104208   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:00.149115   59621 cri.go:89] found id: ""
	I0319 20:37:00.149143   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.149154   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:00.149167   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:00.149225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:00.190572   59621 cri.go:89] found id: ""
	I0319 20:37:00.190604   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.190614   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:00.190622   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:00.190683   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:00.231921   59621 cri.go:89] found id: ""
	I0319 20:37:00.231948   59621 logs.go:276] 0 containers: []
	W0319 20:37:00.231955   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:00.231962   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:00.231975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:00.286508   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:00.286537   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:00.302245   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:00.302269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:00.381248   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:00.381272   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:00.381284   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:00.471314   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:00.471371   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:36:58.471804   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.478113   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:36:58.663274   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:00.663336   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.663834   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:02.588018   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.087994   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:03.018286   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:03.033152   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:03.033209   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:03.098449   59621 cri.go:89] found id: ""
	I0319 20:37:03.098471   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.098481   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:03.098488   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:03.098547   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:03.141297   59621 cri.go:89] found id: ""
	I0319 20:37:03.141323   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.141340   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:03.141346   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:03.141404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:03.184335   59621 cri.go:89] found id: ""
	I0319 20:37:03.184357   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.184365   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:03.184371   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:03.184417   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:03.224814   59621 cri.go:89] found id: ""
	I0319 20:37:03.224838   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.224849   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:03.224860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:03.224918   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:03.264229   59621 cri.go:89] found id: ""
	I0319 20:37:03.264267   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.264278   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:03.264286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:03.264346   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:03.303743   59621 cri.go:89] found id: ""
	I0319 20:37:03.303772   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.303783   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:03.303790   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:03.303840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:03.345347   59621 cri.go:89] found id: ""
	I0319 20:37:03.345373   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.345380   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:03.345386   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:03.345440   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:03.386906   59621 cri.go:89] found id: ""
	I0319 20:37:03.386934   59621 logs.go:276] 0 containers: []
	W0319 20:37:03.386948   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:03.386958   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:03.386976   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:03.474324   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:03.474361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:03.521459   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:03.521495   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:03.574441   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:03.574470   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:03.590780   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:03.590805   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:03.671256   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:06.171764   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:06.187170   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:06.187238   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:02.973736   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.471180   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:05.161734   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.161995   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:07.091895   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.588324   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:06.229517   59621 cri.go:89] found id: ""
	I0319 20:37:06.229541   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.229548   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:06.229555   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:06.229620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:06.267306   59621 cri.go:89] found id: ""
	I0319 20:37:06.267332   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.267343   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:06.267350   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:06.267407   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:06.305231   59621 cri.go:89] found id: ""
	I0319 20:37:06.305258   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.305268   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:06.305275   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:06.305338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:06.346025   59621 cri.go:89] found id: ""
	I0319 20:37:06.346049   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.346060   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:06.346068   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:06.346131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:06.386092   59621 cri.go:89] found id: ""
	I0319 20:37:06.386120   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.386131   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:06.386139   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:06.386193   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:06.424216   59621 cri.go:89] found id: ""
	I0319 20:37:06.424251   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.424270   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:06.424278   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:06.424331   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:06.461840   59621 cri.go:89] found id: ""
	I0319 20:37:06.461876   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.461885   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:06.461891   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:06.461939   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:06.502528   59621 cri.go:89] found id: ""
	I0319 20:37:06.502553   59621 logs.go:276] 0 containers: []
	W0319 20:37:06.502561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:06.502584   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:06.502595   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:06.582900   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:06.582930   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:06.630957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:06.630985   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:06.685459   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:06.685485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:06.700919   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:06.700942   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:06.789656   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.290427   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:09.305199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:09.305265   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:09.347745   59621 cri.go:89] found id: ""
	I0319 20:37:09.347769   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.347781   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:09.347788   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:09.347845   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:09.388589   59621 cri.go:89] found id: ""
	I0319 20:37:09.388619   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.388629   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:09.388636   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:09.388696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:09.425127   59621 cri.go:89] found id: ""
	I0319 20:37:09.425148   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.425156   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:09.425161   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:09.425205   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:09.467418   59621 cri.go:89] found id: ""
	I0319 20:37:09.467440   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.467450   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:09.467458   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:09.467520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:09.509276   59621 cri.go:89] found id: ""
	I0319 20:37:09.509309   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.509320   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:09.509327   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:09.509387   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:09.548894   59621 cri.go:89] found id: ""
	I0319 20:37:09.548918   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.548925   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:09.548931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:09.548991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:09.592314   59621 cri.go:89] found id: ""
	I0319 20:37:09.592333   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.592339   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:09.592344   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:09.592390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:09.632916   59621 cri.go:89] found id: ""
	I0319 20:37:09.632943   59621 logs.go:276] 0 containers: []
	W0319 20:37:09.632954   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:09.632965   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:09.632981   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:09.687835   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:09.687870   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:09.706060   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:09.706085   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:09.819536   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:09.819578   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:09.819594   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:09.904891   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:09.904925   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:07.971754   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.974080   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:09.162947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:11.661800   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.088585   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.588430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:12.452940   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:12.469099   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:12.469177   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:12.512819   59621 cri.go:89] found id: ""
	I0319 20:37:12.512842   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.512849   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:12.512855   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:12.512911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:12.551109   59621 cri.go:89] found id: ""
	I0319 20:37:12.551136   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.551143   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:12.551149   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:12.551225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:12.591217   59621 cri.go:89] found id: ""
	I0319 20:37:12.591241   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.591247   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:12.591253   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:12.591298   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:12.629877   59621 cri.go:89] found id: ""
	I0319 20:37:12.629905   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.629914   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:12.629922   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:12.629984   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:12.668363   59621 cri.go:89] found id: ""
	I0319 20:37:12.668390   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.668400   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:12.668406   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:12.668461   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:12.713340   59621 cri.go:89] found id: ""
	I0319 20:37:12.713366   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.713373   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:12.713379   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:12.713425   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:12.757275   59621 cri.go:89] found id: ""
	I0319 20:37:12.757302   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.757311   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:12.757316   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:12.757362   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:12.795143   59621 cri.go:89] found id: ""
	I0319 20:37:12.795173   59621 logs.go:276] 0 containers: []
	W0319 20:37:12.795182   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:12.795200   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:12.795213   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:12.883721   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:12.883743   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:12.883757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:12.970748   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:12.970777   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:13.015874   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:13.015922   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:13.071394   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:13.071427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:15.587386   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:15.602477   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:15.602553   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:15.645784   59621 cri.go:89] found id: ""
	I0319 20:37:15.645815   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.645826   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:15.645834   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:15.645897   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:15.689264   59621 cri.go:89] found id: ""
	I0319 20:37:15.689293   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.689313   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:15.689321   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:15.689390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:15.730712   59621 cri.go:89] found id: ""
	I0319 20:37:15.730795   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.730812   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:15.730819   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:15.730891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:15.779077   59621 cri.go:89] found id: ""
	I0319 20:37:15.779108   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.779120   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:15.779128   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:15.779182   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:15.824212   59621 cri.go:89] found id: ""
	I0319 20:37:15.824240   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.824251   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:15.824273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:15.824335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:15.871111   59621 cri.go:89] found id: ""
	I0319 20:37:15.871140   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.871147   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:15.871153   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:15.871229   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:15.922041   59621 cri.go:89] found id: ""
	I0319 20:37:15.922068   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.922078   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:15.922086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:15.922144   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:15.964956   59621 cri.go:89] found id: ""
	I0319 20:37:15.964977   59621 logs.go:276] 0 containers: []
	W0319 20:37:15.964987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:15.964998   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:15.965013   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:16.039416   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:16.039439   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:16.039455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:16.121059   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:16.121088   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.169892   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:16.169918   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:12.475641   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:14.971849   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:13.662232   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:15.663770   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.588577   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.590602   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:16.225856   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:16.225894   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:18.741707   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:18.757601   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:18.757669   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:18.795852   59621 cri.go:89] found id: ""
	I0319 20:37:18.795892   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.795903   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:18.795909   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:18.795973   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:18.835782   59621 cri.go:89] found id: ""
	I0319 20:37:18.835809   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.835817   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:18.835822   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:18.835882   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:18.876330   59621 cri.go:89] found id: ""
	I0319 20:37:18.876353   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.876361   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:18.876366   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:18.876421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:18.920159   59621 cri.go:89] found id: ""
	I0319 20:37:18.920187   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.920198   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:18.920205   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:18.920278   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:18.959461   59621 cri.go:89] found id: ""
	I0319 20:37:18.959480   59621 logs.go:276] 0 containers: []
	W0319 20:37:18.959487   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:18.959492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:18.959551   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:19.001193   59621 cri.go:89] found id: ""
	I0319 20:37:19.001218   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.001226   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:19.001232   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:19.001288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:19.040967   59621 cri.go:89] found id: ""
	I0319 20:37:19.040995   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.041006   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:19.041013   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:19.041077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:19.085490   59621 cri.go:89] found id: ""
	I0319 20:37:19.085516   59621 logs.go:276] 0 containers: []
	W0319 20:37:19.085525   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:19.085534   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:19.085547   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:19.140829   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:19.140861   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:19.156032   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:19.156054   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:19.241687   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:19.241714   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:19.241726   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:19.321710   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:19.321762   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:16.972091   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.972471   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.473526   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:18.161717   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:20.166272   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:22.661804   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.088608   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:23.587236   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:21.867596   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:21.882592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:21.882673   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:21.925555   59621 cri.go:89] found id: ""
	I0319 20:37:21.925580   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.925590   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:21.925598   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:21.925656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:21.970483   59621 cri.go:89] found id: ""
	I0319 20:37:21.970511   59621 logs.go:276] 0 containers: []
	W0319 20:37:21.970522   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:21.970529   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:21.970594   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:22.009908   59621 cri.go:89] found id: ""
	I0319 20:37:22.009934   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.009945   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:22.009960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:22.010029   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:22.050470   59621 cri.go:89] found id: ""
	I0319 20:37:22.050496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.050506   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:22.050513   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:22.050576   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:22.094091   59621 cri.go:89] found id: ""
	I0319 20:37:22.094116   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.094127   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:22.094135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:22.094192   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:22.134176   59621 cri.go:89] found id: ""
	I0319 20:37:22.134205   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.134224   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:22.134233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:22.134294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:22.178455   59621 cri.go:89] found id: ""
	I0319 20:37:22.178496   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.178506   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:22.178512   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:22.178568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:22.222432   59621 cri.go:89] found id: ""
	I0319 20:37:22.222461   59621 logs.go:276] 0 containers: []
	W0319 20:37:22.222472   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:22.222482   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:22.222497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:22.270957   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:22.270992   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:22.324425   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:22.324457   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:22.340463   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:22.340492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:22.418833   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:22.418854   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:22.418869   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.003905   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:25.019917   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:25.019991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:25.060609   59621 cri.go:89] found id: ""
	I0319 20:37:25.060631   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.060639   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:25.060645   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:25.060699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:25.099387   59621 cri.go:89] found id: ""
	I0319 20:37:25.099412   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.099422   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:25.099427   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:25.099470   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:25.141437   59621 cri.go:89] found id: ""
	I0319 20:37:25.141465   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.141475   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:25.141482   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:25.141540   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:25.184195   59621 cri.go:89] found id: ""
	I0319 20:37:25.184221   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.184232   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:25.184239   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:25.184312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:25.224811   59621 cri.go:89] found id: ""
	I0319 20:37:25.224833   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.224843   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:25.224851   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:25.224911   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:25.263238   59621 cri.go:89] found id: ""
	I0319 20:37:25.263259   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.263267   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:25.263273   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:25.263319   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:25.304355   59621 cri.go:89] found id: ""
	I0319 20:37:25.304380   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.304390   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:25.304397   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:25.304454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:25.345916   59621 cri.go:89] found id: ""
	I0319 20:37:25.345941   59621 logs.go:276] 0 containers: []
	W0319 20:37:25.345952   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:25.345961   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:25.345975   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:25.433812   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:25.433854   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:25.477733   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:25.477757   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:25.532792   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:25.532831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:25.548494   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:25.548527   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:25.627571   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:23.975755   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.472094   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:24.663592   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:26.664475   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:25.589800   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.087868   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.088398   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:28.128120   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:28.142930   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:28.142989   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:28.181365   59621 cri.go:89] found id: ""
	I0319 20:37:28.181391   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.181399   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:28.181405   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:28.181460   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:28.221909   59621 cri.go:89] found id: ""
	I0319 20:37:28.221936   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.221946   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:28.221954   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:28.222013   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:28.263075   59621 cri.go:89] found id: ""
	I0319 20:37:28.263103   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.263114   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:28.263121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:28.263175   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:28.302083   59621 cri.go:89] found id: ""
	I0319 20:37:28.302111   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.302121   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:28.302131   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:28.302189   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:28.343223   59621 cri.go:89] found id: ""
	I0319 20:37:28.343253   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.343264   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:28.343286   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:28.343354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:28.379936   59621 cri.go:89] found id: ""
	I0319 20:37:28.379966   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.379977   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:28.379984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:28.380038   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:28.418232   59621 cri.go:89] found id: ""
	I0319 20:37:28.418262   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.418272   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:28.418280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:28.418339   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:28.455238   59621 cri.go:89] found id: ""
	I0319 20:37:28.455265   59621 logs.go:276] 0 containers: []
	W0319 20:37:28.455275   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:28.455286   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:28.455302   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:28.501253   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:28.501281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:28.555968   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:28.555998   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:28.570136   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:28.570158   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:28.650756   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:28.650784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:28.650798   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:28.472705   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:30.972037   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:29.162647   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.662382   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:32.088569   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.587686   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:31.229149   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:31.246493   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:31.246567   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:31.286900   59621 cri.go:89] found id: ""
	I0319 20:37:31.286925   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.286937   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:31.286944   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:31.286997   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:31.331795   59621 cri.go:89] found id: ""
	I0319 20:37:31.331825   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.331836   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:31.331844   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:31.331910   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:31.371871   59621 cri.go:89] found id: ""
	I0319 20:37:31.371901   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.371911   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:31.371919   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:31.371975   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:31.414086   59621 cri.go:89] found id: ""
	I0319 20:37:31.414110   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.414118   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:31.414123   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:31.414178   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:31.455552   59621 cri.go:89] found id: ""
	I0319 20:37:31.455580   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.455590   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:31.455597   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:31.455659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:31.497280   59621 cri.go:89] found id: ""
	I0319 20:37:31.497309   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.497320   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:31.497328   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:31.497395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:31.539224   59621 cri.go:89] found id: ""
	I0319 20:37:31.539247   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.539255   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:31.539260   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:31.539315   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:31.575381   59621 cri.go:89] found id: ""
	I0319 20:37:31.575404   59621 logs.go:276] 0 containers: []
	W0319 20:37:31.575411   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:31.575419   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:31.575431   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:31.629018   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:31.629051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:31.644588   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:31.644612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:31.723533   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:31.723563   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:31.723578   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:31.806720   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:31.806747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.354387   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:34.368799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:34.368861   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:34.409945   59621 cri.go:89] found id: ""
	I0319 20:37:34.409978   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.409989   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:34.409996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:34.410044   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:34.452971   59621 cri.go:89] found id: ""
	I0319 20:37:34.452993   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.453001   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:34.453014   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:34.453077   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:34.492851   59621 cri.go:89] found id: ""
	I0319 20:37:34.492875   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.492886   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:34.492892   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:34.492937   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:34.532430   59621 cri.go:89] found id: ""
	I0319 20:37:34.532462   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.532473   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:34.532481   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:34.532539   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:34.571800   59621 cri.go:89] found id: ""
	I0319 20:37:34.571827   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.571835   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:34.571840   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:34.571907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:34.610393   59621 cri.go:89] found id: ""
	I0319 20:37:34.610429   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.610439   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:34.610448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:34.610508   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:34.655214   59621 cri.go:89] found id: ""
	I0319 20:37:34.655241   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.655249   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:34.655254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:34.655303   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:34.698153   59621 cri.go:89] found id: ""
	I0319 20:37:34.698175   59621 logs.go:276] 0 containers: []
	W0319 20:37:34.698183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:34.698191   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:34.698201   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:34.748573   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:34.748608   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:34.810533   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:34.810567   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:34.829479   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:34.829507   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:34.903279   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:34.903300   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:34.903311   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:32.972676   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:35.471024   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:34.161665   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.169093   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:36.587810   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.590891   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:37.490820   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:37.505825   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:37.505887   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:37.544829   59621 cri.go:89] found id: ""
	I0319 20:37:37.544857   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.544864   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:37.544870   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:37.544925   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:37.589947   59621 cri.go:89] found id: ""
	I0319 20:37:37.589968   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.589975   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:37.589981   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:37.590028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:37.632290   59621 cri.go:89] found id: ""
	I0319 20:37:37.632321   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.632332   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:37.632340   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:37.632403   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:37.673984   59621 cri.go:89] found id: ""
	I0319 20:37:37.674014   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.674024   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:37.674032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:37.674090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:37.717001   59621 cri.go:89] found id: ""
	I0319 20:37:37.717024   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.717032   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:37.717039   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:37.717085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:37.758611   59621 cri.go:89] found id: ""
	I0319 20:37:37.758633   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.758640   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:37.758646   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:37.758696   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:37.815024   59621 cri.go:89] found id: ""
	I0319 20:37:37.815051   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.815062   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:37.815071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:37.815133   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:37.859084   59621 cri.go:89] found id: ""
	I0319 20:37:37.859115   59621 logs.go:276] 0 containers: []
	W0319 20:37:37.859122   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:37.859130   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:37.859147   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:37.936822   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:37.936850   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:37.936867   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:38.020612   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:38.020645   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:38.065216   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:38.065299   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:38.119158   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:38.119189   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:40.636672   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:40.651709   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:40.651775   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:40.694782   59621 cri.go:89] found id: ""
	I0319 20:37:40.694803   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.694810   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:40.694815   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:40.694859   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:40.733989   59621 cri.go:89] found id: ""
	I0319 20:37:40.734017   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.734027   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:40.734034   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:40.734097   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:40.777269   59621 cri.go:89] found id: ""
	I0319 20:37:40.777293   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.777300   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:40.777307   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:40.777365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:40.815643   59621 cri.go:89] found id: ""
	I0319 20:37:40.815679   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.815689   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:40.815696   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:40.815761   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:40.856536   59621 cri.go:89] found id: ""
	I0319 20:37:40.856565   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.856576   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:40.856584   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:40.856641   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:40.897772   59621 cri.go:89] found id: ""
	I0319 20:37:40.897795   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.897802   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:40.897808   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:40.897853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:40.939911   59621 cri.go:89] found id: ""
	I0319 20:37:40.939947   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.939960   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:40.939969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:40.940033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:40.979523   59621 cri.go:89] found id: ""
	I0319 20:37:40.979551   59621 logs.go:276] 0 containers: []
	W0319 20:37:40.979561   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:40.979571   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:40.979586   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.037172   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:41.037207   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:41.054212   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:41.054239   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:41.129744   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:41.129773   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:41.129789   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:41.208752   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:41.208784   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:37.472396   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:39.472831   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:38.662719   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:40.663337   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:41.088396   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.089545   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.755123   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:43.771047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:43.771116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:43.819672   59621 cri.go:89] found id: ""
	I0319 20:37:43.819707   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.819718   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:43.819727   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:43.819788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:43.859306   59621 cri.go:89] found id: ""
	I0319 20:37:43.859337   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.859348   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:43.859354   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:43.859404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:43.901053   59621 cri.go:89] found id: ""
	I0319 20:37:43.901073   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.901080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:43.901086   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:43.901137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:43.942724   59621 cri.go:89] found id: ""
	I0319 20:37:43.942750   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.942761   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:43.942768   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:43.942822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:43.985993   59621 cri.go:89] found id: ""
	I0319 20:37:43.986020   59621 logs.go:276] 0 containers: []
	W0319 20:37:43.986030   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:43.986038   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:43.986089   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:44.026452   59621 cri.go:89] found id: ""
	I0319 20:37:44.026480   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.026497   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:44.026506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:44.026601   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:44.066210   59621 cri.go:89] found id: ""
	I0319 20:37:44.066235   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.066245   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:44.066252   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:44.066305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:44.105778   59621 cri.go:89] found id: ""
	I0319 20:37:44.105801   59621 logs.go:276] 0 containers: []
	W0319 20:37:44.105807   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:44.105815   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:44.105826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:44.121641   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:44.121670   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:44.206723   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:44.206750   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:44.206765   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:44.295840   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:44.295874   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:44.345991   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:44.346029   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:41.972560   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:44.471857   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:43.162059   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.163324   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:47.662016   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:45.588501   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:48.087736   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:50.091413   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:46.902540   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:46.918932   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:46.919001   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:46.960148   59621 cri.go:89] found id: ""
	I0319 20:37:46.960179   59621 logs.go:276] 0 containers: []
	W0319 20:37:46.960189   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:46.960197   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:46.960280   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:47.002527   59621 cri.go:89] found id: ""
	I0319 20:37:47.002551   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.002558   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:47.002563   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:47.002634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:47.047911   59621 cri.go:89] found id: ""
	I0319 20:37:47.047935   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.047944   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:47.047950   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:47.047995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:47.085044   59621 cri.go:89] found id: ""
	I0319 20:37:47.085078   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.085085   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:47.085092   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:47.085160   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:47.127426   59621 cri.go:89] found id: ""
	I0319 20:37:47.127452   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.127463   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:47.127470   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:47.127531   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:47.171086   59621 cri.go:89] found id: ""
	I0319 20:37:47.171112   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.171122   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:47.171130   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:47.171185   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:47.209576   59621 cri.go:89] found id: ""
	I0319 20:37:47.209600   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.209607   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:47.209614   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:47.209674   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:47.245131   59621 cri.go:89] found id: ""
	I0319 20:37:47.245153   59621 logs.go:276] 0 containers: []
	W0319 20:37:47.245159   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:47.245167   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:47.245176   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:47.301454   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:47.301485   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:47.317445   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:47.317468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:47.399753   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:47.399777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:47.399793   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:47.487933   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:47.487965   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.032753   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:50.050716   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:50.050790   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:50.106124   59621 cri.go:89] found id: ""
	I0319 20:37:50.106143   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.106151   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:50.106157   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:50.106210   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:50.172653   59621 cri.go:89] found id: ""
	I0319 20:37:50.172673   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.172680   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:50.172685   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:50.172741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:50.222214   59621 cri.go:89] found id: ""
	I0319 20:37:50.222234   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.222242   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:50.222247   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:50.222291   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:50.266299   59621 cri.go:89] found id: ""
	I0319 20:37:50.266325   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.266335   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:50.266341   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:50.266386   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:50.307464   59621 cri.go:89] found id: ""
	I0319 20:37:50.307496   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.307518   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:50.307524   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:50.307583   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:50.348063   59621 cri.go:89] found id: ""
	I0319 20:37:50.348090   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.348100   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:50.348107   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:50.348169   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:50.387014   59621 cri.go:89] found id: ""
	I0319 20:37:50.387037   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.387044   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:50.387049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:50.387095   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:50.428073   59621 cri.go:89] found id: ""
	I0319 20:37:50.428096   59621 logs.go:276] 0 containers: []
	W0319 20:37:50.428104   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:50.428112   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:50.428122   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:50.510293   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:50.510323   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:50.553730   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:50.553769   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:50.609778   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:50.609806   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:50.625688   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:50.625718   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:50.700233   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:46.972679   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.473552   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:49.665655   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.164565   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:52.587562   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.587929   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:53.200807   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:53.218047   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:53.218116   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:53.258057   59621 cri.go:89] found id: ""
	I0319 20:37:53.258087   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.258095   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:53.258100   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:53.258150   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:53.297104   59621 cri.go:89] found id: ""
	I0319 20:37:53.297127   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.297135   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:53.297140   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:53.297198   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:53.338128   59621 cri.go:89] found id: ""
	I0319 20:37:53.338158   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.338172   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:53.338180   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:53.338244   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:53.380527   59621 cri.go:89] found id: ""
	I0319 20:37:53.380554   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.380564   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:53.380571   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:53.380630   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:53.427289   59621 cri.go:89] found id: ""
	I0319 20:37:53.427319   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.427331   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:53.427338   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:53.427393   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:53.474190   59621 cri.go:89] found id: ""
	I0319 20:37:53.474215   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.474225   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:53.474233   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:53.474288   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:53.518506   59621 cri.go:89] found id: ""
	I0319 20:37:53.518534   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.518545   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:53.518560   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:53.518620   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:53.563288   59621 cri.go:89] found id: ""
	I0319 20:37:53.563316   59621 logs.go:276] 0 containers: []
	W0319 20:37:53.563342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:53.563354   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:53.563374   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:53.577963   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:53.577991   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:53.662801   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:53.662820   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:53.662830   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:53.745524   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:53.745553   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:53.803723   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:53.803759   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:51.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.471542   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.472616   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:54.663037   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.666932   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.588855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.087276   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:56.353791   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:56.367898   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:56.367962   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:56.406800   59621 cri.go:89] found id: ""
	I0319 20:37:56.406826   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.406835   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:56.406843   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:56.406908   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:56.449365   59621 cri.go:89] found id: ""
	I0319 20:37:56.449402   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.449423   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:56.449437   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:56.449494   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:56.489273   59621 cri.go:89] found id: ""
	I0319 20:37:56.489299   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.489307   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:56.489313   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:56.489368   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:56.529681   59621 cri.go:89] found id: ""
	I0319 20:37:56.529710   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.529721   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:56.529727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:56.529791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:56.568751   59621 cri.go:89] found id: ""
	I0319 20:37:56.568777   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.568785   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:56.568791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:56.568840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:56.608197   59621 cri.go:89] found id: ""
	I0319 20:37:56.608221   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.608229   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:56.608235   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:56.608300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:56.647000   59621 cri.go:89] found id: ""
	I0319 20:37:56.647027   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.647034   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:56.647045   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:56.647102   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:56.695268   59621 cri.go:89] found id: ""
	I0319 20:37:56.695302   59621 logs.go:276] 0 containers: []
	W0319 20:37:56.695313   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:56.695324   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:56.695337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:56.751129   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:56.751162   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:56.766878   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:56.766900   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:56.844477   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:56.844504   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:56.844520   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:37:56.927226   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:37:56.927272   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:59.477876   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:37:59.492999   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:37:59.493052   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:37:59.530899   59621 cri.go:89] found id: ""
	I0319 20:37:59.530929   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.530940   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:37:59.530947   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:37:59.531004   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:37:59.572646   59621 cri.go:89] found id: ""
	I0319 20:37:59.572675   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.572684   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:37:59.572692   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:37:59.572755   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:37:59.612049   59621 cri.go:89] found id: ""
	I0319 20:37:59.612073   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.612080   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:37:59.612085   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:37:59.612131   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:37:59.656193   59621 cri.go:89] found id: ""
	I0319 20:37:59.656232   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.656243   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:37:59.656254   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:37:59.656335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:37:59.698406   59621 cri.go:89] found id: ""
	I0319 20:37:59.698429   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.698437   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:37:59.698442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:37:59.698491   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:37:59.743393   59621 cri.go:89] found id: ""
	I0319 20:37:59.743426   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.743457   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:37:59.743465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:37:59.743524   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:37:59.790673   59621 cri.go:89] found id: ""
	I0319 20:37:59.790701   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.790712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:37:59.790720   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:37:59.790780   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:37:59.832311   59621 cri.go:89] found id: ""
	I0319 20:37:59.832342   59621 logs.go:276] 0 containers: []
	W0319 20:37:59.832359   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:37:59.832368   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:37:59.832380   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:37:59.887229   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:37:59.887261   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:37:59.903258   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:37:59.903281   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:37:59.989337   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:37:59.989373   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:37:59.989387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:00.066102   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:00.066136   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:37:58.971607   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.474225   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:37:59.165581   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.169140   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:01.087715   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.092449   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:02.610568   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:02.625745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:02.625804   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:02.669944   59621 cri.go:89] found id: ""
	I0319 20:38:02.669973   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.669983   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:02.669990   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:02.670048   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:02.710157   59621 cri.go:89] found id: ""
	I0319 20:38:02.710181   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.710190   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:02.710195   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:02.710251   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:02.750930   59621 cri.go:89] found id: ""
	I0319 20:38:02.750960   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.750969   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:02.750975   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:02.751033   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:02.790449   59621 cri.go:89] found id: ""
	I0319 20:38:02.790480   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.790491   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:02.790499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:02.790552   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:02.827675   59621 cri.go:89] found id: ""
	I0319 20:38:02.827709   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.827720   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:02.827727   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:02.827777   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:02.871145   59621 cri.go:89] found id: ""
	I0319 20:38:02.871180   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.871190   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:02.871199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:02.871282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:02.912050   59621 cri.go:89] found id: ""
	I0319 20:38:02.912079   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.912088   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:02.912094   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:02.912152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:02.952094   59621 cri.go:89] found id: ""
	I0319 20:38:02.952123   59621 logs.go:276] 0 containers: []
	W0319 20:38:02.952135   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:02.952146   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:02.952161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:03.031768   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:03.031788   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:03.031800   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.109464   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:03.109492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:03.154111   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:03.154138   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:03.210523   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:03.210556   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:05.727297   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:05.741423   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:05.741487   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:05.781351   59621 cri.go:89] found id: ""
	I0319 20:38:05.781380   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.781389   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:05.781396   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:05.781453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:05.822041   59621 cri.go:89] found id: ""
	I0319 20:38:05.822074   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.822086   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:05.822093   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:05.822149   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:05.861636   59621 cri.go:89] found id: ""
	I0319 20:38:05.861669   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.861680   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:05.861686   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:05.861734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:05.901024   59621 cri.go:89] found id: ""
	I0319 20:38:05.901053   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.901061   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:05.901067   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:05.901127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:05.948404   59621 cri.go:89] found id: ""
	I0319 20:38:05.948436   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.948447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:05.948455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:05.948515   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:05.992787   59621 cri.go:89] found id: ""
	I0319 20:38:05.992813   59621 logs.go:276] 0 containers: []
	W0319 20:38:05.992824   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:05.992832   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:05.992891   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:06.032206   59621 cri.go:89] found id: ""
	I0319 20:38:06.032243   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.032251   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:06.032283   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:06.032343   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:06.071326   59621 cri.go:89] found id: ""
	I0319 20:38:06.071361   59621 logs.go:276] 0 containers: []
	W0319 20:38:06.071371   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:06.071381   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:06.071397   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:06.149825   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:06.149848   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:06.149863   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:03.972924   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.473336   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:03.665054   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.666413   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:05.588698   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.087857   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.088761   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:06.230078   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:06.230110   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:06.280626   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:06.280652   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:06.331398   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:06.331427   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:08.847443   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:08.862412   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:08.862480   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:08.902793   59621 cri.go:89] found id: ""
	I0319 20:38:08.902815   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.902823   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:08.902828   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:08.902884   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:08.942713   59621 cri.go:89] found id: ""
	I0319 20:38:08.942742   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.942753   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:08.942759   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:08.942817   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:08.987319   59621 cri.go:89] found id: ""
	I0319 20:38:08.987342   59621 logs.go:276] 0 containers: []
	W0319 20:38:08.987349   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:08.987355   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:08.987420   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:09.026583   59621 cri.go:89] found id: ""
	I0319 20:38:09.026608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.026619   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:09.026626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:09.026699   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:09.065227   59621 cri.go:89] found id: ""
	I0319 20:38:09.065252   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.065262   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:09.065269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:09.065347   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:09.114595   59621 cri.go:89] found id: ""
	I0319 20:38:09.114618   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.114627   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:09.114636   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:09.114694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:09.160110   59621 cri.go:89] found id: ""
	I0319 20:38:09.160137   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.160147   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:09.160155   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:09.160214   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:09.205580   59621 cri.go:89] found id: ""
	I0319 20:38:09.205608   59621 logs.go:276] 0 containers: []
	W0319 20:38:09.205616   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:09.205626   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:09.205641   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:09.253361   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:09.253389   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:09.310537   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:09.310571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:09.326404   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:09.326430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:09.406469   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:09.406489   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:09.406517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:08.475109   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.973956   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:08.162101   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:10.663715   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:12.588671   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.088453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:11.987711   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:12.002868   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:12.002934   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:12.041214   59621 cri.go:89] found id: ""
	I0319 20:38:12.041237   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.041244   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:12.041249   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:12.041311   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:12.079094   59621 cri.go:89] found id: ""
	I0319 20:38:12.079116   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.079123   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:12.079128   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:12.079176   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:12.117249   59621 cri.go:89] found id: ""
	I0319 20:38:12.117272   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.117280   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:12.117285   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:12.117341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:12.157075   59621 cri.go:89] found id: ""
	I0319 20:38:12.157103   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.157114   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:12.157121   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:12.157183   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:12.196104   59621 cri.go:89] found id: ""
	I0319 20:38:12.196131   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.196141   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:12.196149   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:12.196199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:12.238149   59621 cri.go:89] found id: ""
	I0319 20:38:12.238175   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.238186   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:12.238193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:12.238252   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:12.277745   59621 cri.go:89] found id: ""
	I0319 20:38:12.277770   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.277785   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:12.277791   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:12.277848   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:12.318055   59621 cri.go:89] found id: ""
	I0319 20:38:12.318081   59621 logs.go:276] 0 containers: []
	W0319 20:38:12.318091   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:12.318103   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:12.318121   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:12.371317   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:12.371347   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:12.387230   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:12.387258   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:12.466237   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:12.466269   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:12.466287   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:12.555890   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:12.555928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.106594   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:15.120606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:15.120678   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:15.160532   59621 cri.go:89] found id: ""
	I0319 20:38:15.160559   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.160568   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:15.160575   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:15.160632   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:15.200201   59621 cri.go:89] found id: ""
	I0319 20:38:15.200228   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.200238   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:15.200245   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:15.200320   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:15.239140   59621 cri.go:89] found id: ""
	I0319 20:38:15.239172   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.239184   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:15.239192   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:15.239257   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:15.278798   59621 cri.go:89] found id: ""
	I0319 20:38:15.278823   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.278834   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:15.278842   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:15.278919   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:15.318457   59621 cri.go:89] found id: ""
	I0319 20:38:15.318488   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.318498   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:15.318506   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:15.318557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:15.359186   59621 cri.go:89] found id: ""
	I0319 20:38:15.359215   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.359222   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:15.359229   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:15.359290   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:15.395350   59621 cri.go:89] found id: ""
	I0319 20:38:15.395374   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.395384   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:15.395391   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:15.395456   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:15.435786   59621 cri.go:89] found id: ""
	I0319 20:38:15.435811   59621 logs.go:276] 0 containers: []
	W0319 20:38:15.435821   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:15.435834   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:15.435851   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:15.515007   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:15.515050   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:15.567341   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:15.567379   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:15.621949   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:15.621978   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:15.637981   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:15.638009   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:15.714146   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:13.473479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.971583   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:13.162747   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:15.163005   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.662157   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:17.587779   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:19.588889   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:18.214600   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:18.230287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:18.230357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:18.268741   59621 cri.go:89] found id: ""
	I0319 20:38:18.268765   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.268773   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:18.268778   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:18.268822   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:18.339026   59621 cri.go:89] found id: ""
	I0319 20:38:18.339054   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.339064   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:18.339071   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:18.339127   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:18.378567   59621 cri.go:89] found id: ""
	I0319 20:38:18.378594   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.378604   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:18.378613   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:18.378690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:18.414882   59621 cri.go:89] found id: ""
	I0319 20:38:18.414914   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.414924   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:18.414931   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:18.414995   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:18.457981   59621 cri.go:89] found id: ""
	I0319 20:38:18.458010   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.458021   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:18.458028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:18.458085   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:18.498750   59621 cri.go:89] found id: ""
	I0319 20:38:18.498777   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.498788   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:18.498796   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:18.498840   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:18.538669   59621 cri.go:89] found id: ""
	I0319 20:38:18.538700   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.538712   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:18.538719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:18.538776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:18.578310   59621 cri.go:89] found id: ""
	I0319 20:38:18.578337   59621 logs.go:276] 0 containers: []
	W0319 20:38:18.578347   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:18.578359   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:18.578376   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:18.594433   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:18.594455   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:18.675488   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:18.675512   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:18.675528   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:18.753790   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:18.753826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:18.797794   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:18.797831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:18.473455   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.473644   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:20.162290   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:22.167423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.589226   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.090617   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:21.358212   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:21.372874   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:21.372951   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:21.412747   59621 cri.go:89] found id: ""
	I0319 20:38:21.412776   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.412786   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:21.412793   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:21.412853   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:21.454152   59621 cri.go:89] found id: ""
	I0319 20:38:21.454183   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.454192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:21.454199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:21.454260   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:21.495982   59621 cri.go:89] found id: ""
	I0319 20:38:21.496014   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.496025   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:21.496031   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:21.496096   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:21.537425   59621 cri.go:89] found id: ""
	I0319 20:38:21.537448   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.537455   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:21.537460   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:21.537522   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:21.577434   59621 cri.go:89] found id: ""
	I0319 20:38:21.577461   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.577468   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:21.577474   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:21.577523   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:21.622237   59621 cri.go:89] found id: ""
	I0319 20:38:21.622268   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.622280   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:21.622287   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:21.622341   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:21.671458   59621 cri.go:89] found id: ""
	I0319 20:38:21.671484   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.671495   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:21.671501   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:21.671549   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:21.712081   59621 cri.go:89] found id: ""
	I0319 20:38:21.712101   59621 logs.go:276] 0 containers: []
	W0319 20:38:21.712109   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:21.712119   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:21.712134   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:21.767093   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:21.767130   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:21.783272   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:21.783298   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:21.858398   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:21.858419   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:21.858430   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:21.938469   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:21.938505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:24.485373   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:24.499848   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:24.499902   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:24.539403   59621 cri.go:89] found id: ""
	I0319 20:38:24.539444   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.539454   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:24.539461   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:24.539520   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:24.581169   59621 cri.go:89] found id: ""
	I0319 20:38:24.581202   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.581212   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:24.581219   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:24.581272   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:24.627143   59621 cri.go:89] found id: ""
	I0319 20:38:24.627174   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.627186   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:24.627193   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:24.627253   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:24.675212   59621 cri.go:89] found id: ""
	I0319 20:38:24.675233   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.675239   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:24.675245   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:24.675312   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:24.728438   59621 cri.go:89] found id: ""
	I0319 20:38:24.728467   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.728477   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:24.728485   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:24.728542   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:24.799868   59621 cri.go:89] found id: ""
	I0319 20:38:24.799898   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.799907   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:24.799915   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:24.799977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:24.849805   59621 cri.go:89] found id: ""
	I0319 20:38:24.849859   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.849870   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:24.849878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:24.849949   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:24.891161   59621 cri.go:89] found id: ""
	I0319 20:38:24.891189   59621 logs.go:276] 0 containers: []
	W0319 20:38:24.891200   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:24.891210   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:24.891224   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:24.965356   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:24.965384   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:24.965401   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:25.042783   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:25.042821   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:25.088893   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:25.088917   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:25.143715   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:25.143755   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:22.473728   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.971753   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:24.663722   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.665702   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:26.589574   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.088379   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:27.662847   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:27.677323   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:27.677405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:27.714869   59621 cri.go:89] found id: ""
	I0319 20:38:27.714890   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.714897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:27.714902   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:27.714946   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:27.754613   59621 cri.go:89] found id: ""
	I0319 20:38:27.754639   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.754647   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:27.754654   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:27.754709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:27.793266   59621 cri.go:89] found id: ""
	I0319 20:38:27.793296   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.793303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:27.793309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:27.793356   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:27.835313   59621 cri.go:89] found id: ""
	I0319 20:38:27.835337   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.835344   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:27.835351   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:27.835404   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:27.873516   59621 cri.go:89] found id: ""
	I0319 20:38:27.873540   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.873547   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:27.873552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:27.873612   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:27.916165   59621 cri.go:89] found id: ""
	I0319 20:38:27.916193   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.916205   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:27.916212   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:27.916282   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:27.954863   59621 cri.go:89] found id: ""
	I0319 20:38:27.954893   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.954900   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:27.954907   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:27.954959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:27.995502   59621 cri.go:89] found id: ""
	I0319 20:38:27.995524   59621 logs.go:276] 0 containers: []
	W0319 20:38:27.995531   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:27.995538   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:27.995549   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:28.070516   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:28.070535   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:28.070546   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:28.155731   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:28.155771   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:28.199776   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:28.199804   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:28.254958   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:28.254987   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:30.771006   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:30.784806   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:30.784873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:30.820180   59621 cri.go:89] found id: ""
	I0319 20:38:30.820206   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.820216   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:30.820223   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:30.820300   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:30.860938   59621 cri.go:89] found id: ""
	I0319 20:38:30.860970   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.860981   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:30.860990   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:30.861046   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:30.899114   59621 cri.go:89] found id: ""
	I0319 20:38:30.899138   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.899145   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:30.899151   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:30.899207   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:30.936909   59621 cri.go:89] found id: ""
	I0319 20:38:30.936942   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.936953   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:30.936960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:30.937020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:30.977368   59621 cri.go:89] found id: ""
	I0319 20:38:30.977399   59621 logs.go:276] 0 containers: []
	W0319 20:38:30.977409   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:30.977419   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:30.977510   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:31.015468   59621 cri.go:89] found id: ""
	I0319 20:38:31.015497   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.015507   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:31.015515   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:31.015577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:31.055129   59621 cri.go:89] found id: ""
	I0319 20:38:31.055153   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.055161   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:31.055168   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:31.055225   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:31.093231   59621 cri.go:89] found id: ""
	I0319 20:38:31.093250   59621 logs.go:276] 0 containers: []
	W0319 20:38:31.093257   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:31.093264   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:31.093275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:31.148068   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:31.148103   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:31.164520   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:31.164540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:38:26.972361   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:29.471757   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.473307   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:28.666420   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.162701   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:31.089336   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.587759   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	W0319 20:38:31.244051   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:31.244079   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:31.244093   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:31.323228   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:31.323269   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.872004   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:33.886991   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:33.887047   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:33.926865   59621 cri.go:89] found id: ""
	I0319 20:38:33.926888   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.926899   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:33.926908   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:33.926961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:33.970471   59621 cri.go:89] found id: ""
	I0319 20:38:33.970506   59621 logs.go:276] 0 containers: []
	W0319 20:38:33.970517   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:33.970524   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:33.970577   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:34.008514   59621 cri.go:89] found id: ""
	I0319 20:38:34.008539   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.008546   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:34.008552   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:34.008595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:34.047124   59621 cri.go:89] found id: ""
	I0319 20:38:34.047146   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.047154   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:34.047160   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:34.047204   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:34.082611   59621 cri.go:89] found id: ""
	I0319 20:38:34.082638   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.082648   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:34.082655   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:34.082709   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:34.121120   59621 cri.go:89] found id: ""
	I0319 20:38:34.121156   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.121177   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:34.121185   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:34.121256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:34.158983   59621 cri.go:89] found id: ""
	I0319 20:38:34.159012   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.159021   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:34.159028   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:34.159082   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:34.195200   59621 cri.go:89] found id: ""
	I0319 20:38:34.195221   59621 logs.go:276] 0 containers: []
	W0319 20:38:34.195228   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:34.195236   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:34.195250   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:34.248430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:34.248459   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:34.263551   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:34.263576   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:34.336197   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:34.336223   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:34.336238   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:34.420762   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:34.420795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:33.473519   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:35.972376   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:33.665536   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.161727   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.087816   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.587570   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:36.962790   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:36.977297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:36.977355   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:37.013915   59621 cri.go:89] found id: ""
	I0319 20:38:37.013939   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.013947   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:37.013952   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:37.014010   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:37.054122   59621 cri.go:89] found id: ""
	I0319 20:38:37.054153   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.054161   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:37.054167   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:37.054223   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:37.090278   59621 cri.go:89] found id: ""
	I0319 20:38:37.090295   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.090303   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:37.090308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:37.090365   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:37.133094   59621 cri.go:89] found id: ""
	I0319 20:38:37.133117   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.133127   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:37.133134   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:37.133201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:37.171554   59621 cri.go:89] found id: ""
	I0319 20:38:37.171581   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.171593   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:37.171600   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:37.171659   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:37.209542   59621 cri.go:89] found id: ""
	I0319 20:38:37.209571   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.209579   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:37.209585   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:37.209634   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:37.248314   59621 cri.go:89] found id: ""
	I0319 20:38:37.248341   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.248352   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:37.248359   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:37.248416   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:37.287439   59621 cri.go:89] found id: ""
	I0319 20:38:37.287468   59621 logs.go:276] 0 containers: []
	W0319 20:38:37.287480   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:37.287491   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:37.287505   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:37.341576   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:37.341609   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:37.358496   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:37.358530   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:37.436292   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:37.436321   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:37.436337   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:37.514947   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:37.514980   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:40.062902   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:40.077042   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:40.077124   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:40.118301   59621 cri.go:89] found id: ""
	I0319 20:38:40.118334   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.118345   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:40.118352   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:40.118411   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:40.155677   59621 cri.go:89] found id: ""
	I0319 20:38:40.155704   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.155714   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:40.155721   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:40.155778   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:40.195088   59621 cri.go:89] found id: ""
	I0319 20:38:40.195116   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.195127   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:40.195135   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:40.195194   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:40.232588   59621 cri.go:89] found id: ""
	I0319 20:38:40.232610   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.232618   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:40.232624   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:40.232684   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:40.271623   59621 cri.go:89] found id: ""
	I0319 20:38:40.271654   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.271666   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:40.271673   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:40.271735   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:40.314900   59621 cri.go:89] found id: ""
	I0319 20:38:40.314930   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.314939   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:40.314946   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:40.315007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:40.353881   59621 cri.go:89] found id: ""
	I0319 20:38:40.353908   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.353919   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:40.353926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:40.353991   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:40.394021   59621 cri.go:89] found id: ""
	I0319 20:38:40.394045   59621 logs.go:276] 0 containers: []
	W0319 20:38:40.394056   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:40.394067   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:40.394080   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:40.447511   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:40.447540   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:40.463475   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:40.463497   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:40.539722   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:40.539747   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:40.539767   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:40.620660   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:40.620692   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:38.471727   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.472995   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:38.162339   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.162741   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:42.661979   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:40.588023   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.088381   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:45.091312   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:43.166638   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:43.181057   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:43.181121   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:43.218194   59621 cri.go:89] found id: ""
	I0319 20:38:43.218218   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.218225   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:43.218230   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:43.218277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:43.258150   59621 cri.go:89] found id: ""
	I0319 20:38:43.258180   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.258192   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:43.258199   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:43.258256   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:43.297217   59621 cri.go:89] found id: ""
	I0319 20:38:43.297243   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.297250   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:43.297257   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:43.297305   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:43.334900   59621 cri.go:89] found id: ""
	I0319 20:38:43.334928   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.334937   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:43.334943   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:43.334987   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:43.373028   59621 cri.go:89] found id: ""
	I0319 20:38:43.373053   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.373063   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:43.373071   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:43.373123   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:43.409426   59621 cri.go:89] found id: ""
	I0319 20:38:43.409455   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.409465   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:43.409472   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:43.409535   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:43.449160   59621 cri.go:89] found id: ""
	I0319 20:38:43.449190   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.449201   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:43.449208   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:43.449267   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:43.489301   59621 cri.go:89] found id: ""
	I0319 20:38:43.489329   59621 logs.go:276] 0 containers: []
	W0319 20:38:43.489342   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:43.489352   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:43.489364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:43.545249   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:43.545278   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:43.561573   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:43.561603   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:43.639650   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:43.639671   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:43.639686   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:43.718264   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:43.718296   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:42.474517   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.971377   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:44.662325   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.663603   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:47.587861   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:50.086555   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:46.265920   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:46.281381   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:46.281454   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:46.320044   59621 cri.go:89] found id: ""
	I0319 20:38:46.320076   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.320086   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:46.320094   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:46.320152   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:46.360229   59621 cri.go:89] found id: ""
	I0319 20:38:46.360272   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.360285   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:46.360293   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:46.360357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:46.401268   59621 cri.go:89] found id: ""
	I0319 20:38:46.401297   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.401304   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:46.401310   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:46.401360   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:46.438285   59621 cri.go:89] found id: ""
	I0319 20:38:46.438314   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.438325   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:46.438333   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:46.438390   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:46.474968   59621 cri.go:89] found id: ""
	I0319 20:38:46.475000   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.475013   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:46.475021   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:46.475090   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:46.514302   59621 cri.go:89] found id: ""
	I0319 20:38:46.514325   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.514335   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:46.514353   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:46.514421   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:46.555569   59621 cri.go:89] found id: ""
	I0319 20:38:46.555593   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.555603   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:46.555610   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:46.555668   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:46.596517   59621 cri.go:89] found id: ""
	I0319 20:38:46.596540   59621 logs.go:276] 0 containers: []
	W0319 20:38:46.596550   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:46.596559   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:46.596575   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.641920   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:46.641947   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:46.697550   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:46.697588   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:46.714295   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:46.714318   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:46.793332   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:46.793354   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:46.793367   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.375924   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:49.390195   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:49.390269   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:49.435497   59621 cri.go:89] found id: ""
	I0319 20:38:49.435517   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.435525   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:49.435530   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:49.435586   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:49.478298   59621 cri.go:89] found id: ""
	I0319 20:38:49.478321   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.478331   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:49.478338   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:49.478400   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:49.521482   59621 cri.go:89] found id: ""
	I0319 20:38:49.521518   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.521526   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:49.521531   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:49.521587   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:49.564812   59621 cri.go:89] found id: ""
	I0319 20:38:49.564838   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.564848   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:49.564855   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:49.564926   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:49.607198   59621 cri.go:89] found id: ""
	I0319 20:38:49.607224   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.607234   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:49.607241   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:49.607294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:49.648543   59621 cri.go:89] found id: ""
	I0319 20:38:49.648574   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.648585   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:49.648592   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:49.648656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:49.688445   59621 cri.go:89] found id: ""
	I0319 20:38:49.688474   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.688485   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:49.688492   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:49.688555   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:49.731882   59621 cri.go:89] found id: ""
	I0319 20:38:49.731903   59621 logs.go:276] 0 containers: []
	W0319 20:38:49.731910   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:49.731918   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:49.731928   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:49.783429   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:49.783458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:49.800583   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:49.800606   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:49.879698   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:49.879728   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:49.879739   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:49.955472   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:49.955504   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:46.975287   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.475667   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:49.164849   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:51.661947   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.087983   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.588099   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:52.500676   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:52.515215   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:52.515293   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:52.554677   59621 cri.go:89] found id: ""
	I0319 20:38:52.554706   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.554717   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:52.554724   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:52.554783   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:52.594776   59621 cri.go:89] found id: ""
	I0319 20:38:52.594808   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.594816   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:52.594821   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:52.594873   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:52.634667   59621 cri.go:89] found id: ""
	I0319 20:38:52.634694   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.634701   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:52.634706   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:52.634752   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:52.676650   59621 cri.go:89] found id: ""
	I0319 20:38:52.676675   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.676685   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:52.676694   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:52.676747   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:52.716138   59621 cri.go:89] found id: ""
	I0319 20:38:52.716164   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.716172   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:52.716177   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:52.716227   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:52.754253   59621 cri.go:89] found id: ""
	I0319 20:38:52.754276   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.754284   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:52.754290   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:52.754340   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:52.792247   59621 cri.go:89] found id: ""
	I0319 20:38:52.792291   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.792302   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:52.792309   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:52.792369   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:52.834381   59621 cri.go:89] found id: ""
	I0319 20:38:52.834410   59621 logs.go:276] 0 containers: []
	W0319 20:38:52.834420   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:52.834430   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:52.834444   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:52.888384   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:52.888416   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:52.904319   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:52.904345   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:52.985266   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:52.985286   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:52.985304   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:53.082291   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:53.082331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:55.629422   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:55.643144   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:55.643216   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:55.683958   59621 cri.go:89] found id: ""
	I0319 20:38:55.683983   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.683991   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:55.683996   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:55.684045   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:55.722322   59621 cri.go:89] found id: ""
	I0319 20:38:55.722353   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.722365   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:55.722373   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:55.722432   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:55.772462   59621 cri.go:89] found id: ""
	I0319 20:38:55.772491   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.772501   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:55.772508   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:55.772565   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:55.816617   59621 cri.go:89] found id: ""
	I0319 20:38:55.816643   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.816653   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:55.816661   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:55.816723   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:55.859474   59621 cri.go:89] found id: ""
	I0319 20:38:55.859502   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.859513   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:55.859520   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:55.859585   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:55.899602   59621 cri.go:89] found id: ""
	I0319 20:38:55.899632   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.899643   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:55.899650   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:55.899720   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:55.942545   59621 cri.go:89] found id: ""
	I0319 20:38:55.942574   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.942584   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:55.942590   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:55.942656   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:55.981985   59621 cri.go:89] found id: ""
	I0319 20:38:55.982009   59621 logs.go:276] 0 containers: []
	W0319 20:38:55.982017   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:55.982025   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:55.982043   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:56.062243   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:56.062264   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:56.062275   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:56.144170   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:56.144208   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:56.187015   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:56.187047   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:51.971311   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:53.971907   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:55.972358   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:54.162991   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.163316   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.588120   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:59.090000   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:56.240030   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:56.240057   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:58.756441   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:38:58.770629   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:38:58.770704   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:38:58.824609   59621 cri.go:89] found id: ""
	I0319 20:38:58.824635   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.824645   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:38:58.824653   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:38:58.824741   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:38:58.863698   59621 cri.go:89] found id: ""
	I0319 20:38:58.863727   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.863737   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:38:58.863744   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:38:58.863799   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:38:58.909832   59621 cri.go:89] found id: ""
	I0319 20:38:58.909854   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.909870   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:38:58.909878   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:38:58.909942   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:38:58.947733   59621 cri.go:89] found id: ""
	I0319 20:38:58.947761   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.947780   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:38:58.947788   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:38:58.947852   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:38:58.988658   59621 cri.go:89] found id: ""
	I0319 20:38:58.988683   59621 logs.go:276] 0 containers: []
	W0319 20:38:58.988692   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:38:58.988700   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:38:58.988781   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:38:59.032002   59621 cri.go:89] found id: ""
	I0319 20:38:59.032031   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.032041   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:38:59.032049   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:38:59.032112   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:38:59.072774   59621 cri.go:89] found id: ""
	I0319 20:38:59.072801   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.072810   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:38:59.072816   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:38:59.072879   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:38:59.113300   59621 cri.go:89] found id: ""
	I0319 20:38:59.113321   59621 logs.go:276] 0 containers: []
	W0319 20:38:59.113328   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:38:59.113335   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:38:59.113346   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:38:59.170279   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:38:59.170307   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:38:59.186357   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:38:59.186382   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:38:59.267473   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:38:59.267494   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:38:59.267506   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:38:59.344805   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:38:59.344838   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:38:57.973293   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.471215   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:38:58.662516   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:00.663859   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.588049   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.589283   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:01.891396   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:01.905465   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:01.905543   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:01.943688   59621 cri.go:89] found id: ""
	I0319 20:39:01.943720   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.943730   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:01.943736   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:01.943782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:01.988223   59621 cri.go:89] found id: ""
	I0319 20:39:01.988246   59621 logs.go:276] 0 containers: []
	W0319 20:39:01.988253   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:01.988270   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:01.988335   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:02.027863   59621 cri.go:89] found id: ""
	I0319 20:39:02.027893   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.027901   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:02.027908   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:02.027953   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:02.067758   59621 cri.go:89] found id: ""
	I0319 20:39:02.067784   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.067793   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:02.067799   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:02.067842   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:02.106753   59621 cri.go:89] found id: ""
	I0319 20:39:02.106780   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.106792   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:02.106800   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:02.106858   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:02.143699   59621 cri.go:89] found id: ""
	I0319 20:39:02.143728   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.143738   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:02.143745   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:02.143791   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:02.189363   59621 cri.go:89] found id: ""
	I0319 20:39:02.189413   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.189424   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:02.189431   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:02.189492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:02.225964   59621 cri.go:89] found id: ""
	I0319 20:39:02.225995   59621 logs.go:276] 0 containers: []
	W0319 20:39:02.226006   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:02.226016   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:02.226033   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:02.303895   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:02.303923   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:02.303941   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:02.384456   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:02.384486   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.431440   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:02.431474   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:02.486490   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:02.486524   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.003725   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:05.018200   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:05.018276   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:05.056894   59621 cri.go:89] found id: ""
	I0319 20:39:05.056918   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.056926   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:05.056932   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:05.056977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:05.094363   59621 cri.go:89] found id: ""
	I0319 20:39:05.094394   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.094404   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:05.094411   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:05.094465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:05.131524   59621 cri.go:89] found id: ""
	I0319 20:39:05.131549   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.131561   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:05.131568   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:05.131623   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:05.169844   59621 cri.go:89] found id: ""
	I0319 20:39:05.169880   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.169891   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:05.169899   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:05.169948   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:05.228409   59621 cri.go:89] found id: ""
	I0319 20:39:05.228437   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.228447   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:05.228455   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:05.228506   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:05.292940   59621 cri.go:89] found id: ""
	I0319 20:39:05.292964   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.292971   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:05.292978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:05.293028   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:05.344589   59621 cri.go:89] found id: ""
	I0319 20:39:05.344611   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.344617   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:05.344625   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:05.344685   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:05.385149   59621 cri.go:89] found id: ""
	I0319 20:39:05.385175   59621 logs.go:276] 0 containers: []
	W0319 20:39:05.385183   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:05.385191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:05.385203   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:05.439327   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:05.439361   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:05.455696   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:05.455723   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:05.531762   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:05.531784   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:05.531795   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:05.616581   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:05.616612   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:02.471981   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:04.472495   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:03.164344   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:05.665651   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:06.086880   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.088337   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.166281   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:08.180462   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:08.180533   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:08.219192   59621 cri.go:89] found id: ""
	I0319 20:39:08.219213   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.219220   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:08.219225   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:08.219283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:08.257105   59621 cri.go:89] found id: ""
	I0319 20:39:08.257129   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.257137   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:08.257142   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:08.257201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:08.294620   59621 cri.go:89] found id: ""
	I0319 20:39:08.294646   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.294656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:08.294674   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:08.294730   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:08.333399   59621 cri.go:89] found id: ""
	I0319 20:39:08.333428   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.333436   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:08.333442   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:08.333490   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:08.374601   59621 cri.go:89] found id: ""
	I0319 20:39:08.374625   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.374632   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:08.374638   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:08.374697   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:08.415300   59621 cri.go:89] found id: ""
	I0319 20:39:08.415327   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.415337   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:08.415345   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:08.415410   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:08.457722   59621 cri.go:89] found id: ""
	I0319 20:39:08.457751   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.457762   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:08.457770   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:08.457830   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:08.501591   59621 cri.go:89] found id: ""
	I0319 20:39:08.501620   59621 logs.go:276] 0 containers: []
	W0319 20:39:08.501630   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:08.501640   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:08.501653   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:08.554764   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:08.554801   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:08.570587   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:08.570611   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:08.647513   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:08.647536   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:08.647555   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:08.728352   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:08.728387   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:06.971135   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.971957   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.473482   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:08.162486   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.662096   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:12.662841   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:10.587271   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:13.087563   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:15.088454   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:11.279199   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:11.298588   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:11.298700   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:11.340860   59621 cri.go:89] found id: ""
	I0319 20:39:11.340887   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.340897   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:11.340905   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:11.340961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:11.384360   59621 cri.go:89] found id: ""
	I0319 20:39:11.384386   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.384398   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:11.384405   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:11.384468   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:11.424801   59621 cri.go:89] found id: ""
	I0319 20:39:11.424828   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.424839   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:11.424846   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:11.424907   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:11.464154   59621 cri.go:89] found id: ""
	I0319 20:39:11.464181   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.464192   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:11.464199   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:11.464279   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:11.507608   59621 cri.go:89] found id: ""
	I0319 20:39:11.507635   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.507645   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:11.507653   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:11.507712   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:11.551502   59621 cri.go:89] found id: ""
	I0319 20:39:11.551530   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.551541   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:11.551548   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:11.551613   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:11.590798   59621 cri.go:89] found id: ""
	I0319 20:39:11.590827   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.590837   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:11.590844   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:11.590905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:11.635610   59621 cri.go:89] found id: ""
	I0319 20:39:11.635640   59621 logs.go:276] 0 containers: []
	W0319 20:39:11.635650   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:11.635661   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:11.635676   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:11.690191   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:11.690219   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:11.744430   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:11.744458   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:11.760012   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:11.760038   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:11.839493   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:11.839511   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:11.839529   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:14.420960   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:14.436605   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:14.436680   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:14.476358   59621 cri.go:89] found id: ""
	I0319 20:39:14.476384   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.476391   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:14.476397   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:14.476441   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:14.517577   59621 cri.go:89] found id: ""
	I0319 20:39:14.517605   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.517616   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:14.517623   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:14.517690   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:14.557684   59621 cri.go:89] found id: ""
	I0319 20:39:14.557710   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.557721   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:14.557729   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:14.557788   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:14.602677   59621 cri.go:89] found id: ""
	I0319 20:39:14.602702   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.602712   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:14.602719   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:14.602776   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:14.643181   59621 cri.go:89] found id: ""
	I0319 20:39:14.643204   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.643211   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:14.643217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:14.643273   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:14.684923   59621 cri.go:89] found id: ""
	I0319 20:39:14.684950   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.684962   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:14.684970   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:14.685027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:14.723090   59621 cri.go:89] found id: ""
	I0319 20:39:14.723127   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.723138   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:14.723145   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:14.723201   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:14.768244   59621 cri.go:89] found id: ""
	I0319 20:39:14.768290   59621 logs.go:276] 0 containers: []
	W0319 20:39:14.768302   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:14.768312   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:14.768331   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:14.824963   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:14.825010   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:14.841489   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:14.841517   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:14.927532   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:14.927556   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:14.927571   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:15.011126   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:15.011161   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:13.972462   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.471598   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:14.664028   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:16.665749   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:17.587968   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.087138   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
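	The interleaved pod_ready.go lines above come from parallel StartStop test runs polling their metrics-server pods, which never report the Ready condition. As a hedged sketch only, a rough command-line equivalent of that poll (the harness itself polls through client-go in Go, not kubectl, and the k8s-app=metrics-server label is assumed here) would be:
	    # Approximate, from the CLI, what pod_ready.go keeps waiting for.
	    # Assumes the pod carries the k8s-app=metrics-server label.
	    kubectl --namespace kube-system wait pod \
	      --selector k8s-app=metrics-server \
	      --for=condition=Ready --timeout=120s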
	I0319 20:39:17.557482   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:17.571926   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:17.571990   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:17.615828   59621 cri.go:89] found id: ""
	I0319 20:39:17.615864   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.615872   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:17.615878   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:17.615938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:17.657617   59621 cri.go:89] found id: ""
	I0319 20:39:17.657656   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.657666   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:17.657674   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:17.657738   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:17.696927   59621 cri.go:89] found id: ""
	I0319 20:39:17.696951   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.696962   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:17.696969   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:17.697027   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:17.738101   59621 cri.go:89] found id: ""
	I0319 20:39:17.738126   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.738135   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:17.738143   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:17.738199   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:17.781553   59621 cri.go:89] found id: ""
	I0319 20:39:17.781580   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.781591   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:17.781598   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:17.781658   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:17.825414   59621 cri.go:89] found id: ""
	I0319 20:39:17.825435   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.825442   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:17.825448   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:17.825492   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:17.866117   59621 cri.go:89] found id: ""
	I0319 20:39:17.866149   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.866160   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:17.866182   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:17.866241   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:17.907696   59621 cri.go:89] found id: ""
	I0319 20:39:17.907720   59621 logs.go:276] 0 containers: []
	W0319 20:39:17.907728   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:17.907735   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:17.907747   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:17.949127   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:17.949159   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:18.001481   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:18.001515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.017516   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:18.017542   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:18.096338   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:18.096367   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:18.096384   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:20.678630   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:20.693649   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:20.693722   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:20.733903   59621 cri.go:89] found id: ""
	I0319 20:39:20.733937   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.733949   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:20.733957   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:20.734017   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:20.773234   59621 cri.go:89] found id: ""
	I0319 20:39:20.773261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.773268   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:20.773274   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:20.773328   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:20.810218   59621 cri.go:89] found id: ""
	I0319 20:39:20.810261   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.810273   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:20.810280   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:20.810338   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:20.850549   59621 cri.go:89] found id: ""
	I0319 20:39:20.850581   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.850594   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:20.850603   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:20.850694   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:20.895309   59621 cri.go:89] found id: ""
	I0319 20:39:20.895339   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.895351   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:20.895364   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:20.895430   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:20.941912   59621 cri.go:89] found id: ""
	I0319 20:39:20.941942   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.941951   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:20.941959   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:20.942020   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:20.981933   59621 cri.go:89] found id: ""
	I0319 20:39:20.981960   59621 logs.go:276] 0 containers: []
	W0319 20:39:20.981970   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:20.981978   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:20.982035   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:21.020824   59621 cri.go:89] found id: ""
	I0319 20:39:21.020854   59621 logs.go:276] 0 containers: []
	W0319 20:39:21.020864   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:21.020875   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:21.020889   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:21.104460   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:21.104492   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:21.162209   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:21.162237   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:21.215784   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:21.215813   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:18.471693   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:20.473198   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:19.162423   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.164242   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:22.087921   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.089243   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:21.232036   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:21.232060   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:21.314787   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
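	The cycle that repeats above (pgrep for kube-apiserver, one crictl listing per control-plane component, then kubelet/dmesg/CRI-O log gathering and a failing kubectl describe nodes) re-runs every few seconds while the apiserver on localhost:8443 stays down. A minimal sketch of that per-component probe, built only from the commands shown in the log, is:
	    # Every listing returns no IDs in this run, so each pass logs
	    # "No container was found matching <component>".
	    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                     kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$component")
	      [ -z "$ids" ] && echo "No container was found matching \"$component\""
	    done
	    # Container-status fallback exactly as the harness runs it:
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a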
	I0319 20:39:23.815401   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:23.830032   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:23.830107   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:23.871520   59621 cri.go:89] found id: ""
	I0319 20:39:23.871542   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.871550   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:23.871556   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:23.871609   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:23.913135   59621 cri.go:89] found id: ""
	I0319 20:39:23.913158   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.913165   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:23.913171   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:23.913222   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:23.954617   59621 cri.go:89] found id: ""
	I0319 20:39:23.954648   59621 logs.go:276] 0 containers: []
	W0319 20:39:23.954656   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:23.954662   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:23.954734   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:24.000350   59621 cri.go:89] found id: ""
	I0319 20:39:24.000373   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.000388   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:24.000394   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:24.000453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:24.040732   59621 cri.go:89] found id: ""
	I0319 20:39:24.040784   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.040796   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:24.040804   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:24.040868   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:24.077796   59621 cri.go:89] found id: ""
	I0319 20:39:24.077823   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.077831   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:24.077838   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:24.077900   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:24.122169   59621 cri.go:89] found id: ""
	I0319 20:39:24.122200   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.122209   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:24.122217   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:24.122277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:24.162526   59621 cri.go:89] found id: ""
	I0319 20:39:24.162550   59621 logs.go:276] 0 containers: []
	W0319 20:39:24.162557   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:24.162566   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:24.162580   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:24.216019   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:24.216052   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:24.234041   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:24.234069   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:24.310795   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:24.310818   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:24.310832   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:24.391968   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:24.392003   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:22.971141   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:24.971943   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:23.663805   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.162590   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.587708   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.588720   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:26.939643   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:26.954564   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:26.954622   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:26.996358   59621 cri.go:89] found id: ""
	I0319 20:39:26.996392   59621 logs.go:276] 0 containers: []
	W0319 20:39:26.996402   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:26.996410   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:26.996471   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:27.037031   59621 cri.go:89] found id: ""
	I0319 20:39:27.037062   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.037072   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:27.037080   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:27.037137   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:27.075646   59621 cri.go:89] found id: ""
	I0319 20:39:27.075673   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.075683   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:27.075691   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:27.075743   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:27.115110   59621 cri.go:89] found id: ""
	I0319 20:39:27.115139   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.115150   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:27.115158   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:27.115218   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:27.156783   59621 cri.go:89] found id: ""
	I0319 20:39:27.156811   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.156823   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:27.156830   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:27.156875   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:27.199854   59621 cri.go:89] found id: ""
	I0319 20:39:27.199886   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.199897   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:27.199903   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:27.199959   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:27.241795   59621 cri.go:89] found id: ""
	I0319 20:39:27.241825   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.241836   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:27.241843   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:27.241905   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:27.280984   59621 cri.go:89] found id: ""
	I0319 20:39:27.281014   59621 logs.go:276] 0 containers: []
	W0319 20:39:27.281025   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:27.281036   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:27.281051   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:27.332842   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:27.332878   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:27.349438   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:27.349468   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:27.433360   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:27.433386   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:27.433402   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:27.516739   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:27.516774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:30.063986   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:30.081574   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:30.081644   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:30.128350   59621 cri.go:89] found id: ""
	I0319 20:39:30.128380   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.128392   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:30.128399   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:30.128462   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:30.167918   59621 cri.go:89] found id: ""
	I0319 20:39:30.167938   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.167945   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:30.167950   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:30.167999   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:30.207491   59621 cri.go:89] found id: ""
	I0319 20:39:30.207524   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.207535   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:30.207542   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:30.207608   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:30.248590   59621 cri.go:89] found id: ""
	I0319 20:39:30.248612   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.248620   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:30.248626   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:30.248670   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:30.287695   59621 cri.go:89] found id: ""
	I0319 20:39:30.287722   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.287730   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:30.287735   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:30.287795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:30.333934   59621 cri.go:89] found id: ""
	I0319 20:39:30.333958   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.333966   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:30.333971   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:30.334023   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:30.375015   59621 cri.go:89] found id: ""
	I0319 20:39:30.375040   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.375049   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:30.375056   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:30.375117   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:30.415651   59621 cri.go:89] found id: ""
	I0319 20:39:30.415675   59621 logs.go:276] 0 containers: []
	W0319 20:39:30.415681   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:30.415689   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:30.415700   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:30.476141   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:30.476170   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:30.491487   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:30.491515   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:30.573754   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:30.573777   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:30.573802   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:30.652216   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:30.652247   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:26.972042   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.972160   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:30.973402   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:28.664060   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.161446   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:31.092087   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.588849   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.198826   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:33.215407   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:33.215504   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:33.262519   59621 cri.go:89] found id: ""
	I0319 20:39:33.262546   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.262554   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:33.262559   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:33.262604   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:33.303694   59621 cri.go:89] found id: ""
	I0319 20:39:33.303720   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.303731   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:33.303738   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:33.303798   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:33.343253   59621 cri.go:89] found id: ""
	I0319 20:39:33.343275   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.343283   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:33.343289   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:33.343345   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:33.385440   59621 cri.go:89] found id: ""
	I0319 20:39:33.385463   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.385470   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:33.385476   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:33.385529   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:33.426332   59621 cri.go:89] found id: ""
	I0319 20:39:33.426362   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.426372   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:33.426387   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:33.426465   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:33.473819   59621 cri.go:89] found id: ""
	I0319 20:39:33.473843   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.473853   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:33.473860   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:33.473938   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:33.524667   59621 cri.go:89] found id: ""
	I0319 20:39:33.524694   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.524704   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:33.524711   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:33.524769   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:33.590149   59621 cri.go:89] found id: ""
	I0319 20:39:33.590170   59621 logs.go:276] 0 containers: []
	W0319 20:39:33.590180   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:33.590189   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:33.590204   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:33.648946   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:33.649016   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:33.666349   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:33.666381   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:33.740317   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:33.740343   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:33.740364   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:33.831292   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:33.831330   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:33.473205   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.971076   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:33.162170   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.164007   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:37.662820   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:35.588912   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:38.086910   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:40.089385   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:36.380654   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:36.395707   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:36.395782   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:36.435342   59621 cri.go:89] found id: ""
	I0319 20:39:36.435370   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.435377   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:36.435384   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:36.435433   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:36.478174   59621 cri.go:89] found id: ""
	I0319 20:39:36.478201   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.478213   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:36.478220   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:36.478277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:36.519262   59621 cri.go:89] found id: ""
	I0319 20:39:36.519292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.519302   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:36.519308   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:36.519353   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:36.555974   59621 cri.go:89] found id: ""
	I0319 20:39:36.556003   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.556011   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:36.556017   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:36.556062   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:36.598264   59621 cri.go:89] found id: ""
	I0319 20:39:36.598292   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.598305   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:36.598311   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:36.598357   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:36.635008   59621 cri.go:89] found id: ""
	I0319 20:39:36.635035   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.635046   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:36.635053   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:36.635110   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:36.679264   59621 cri.go:89] found id: ""
	I0319 20:39:36.679287   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.679297   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:36.679304   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:36.679391   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:36.720353   59621 cri.go:89] found id: ""
	I0319 20:39:36.720409   59621 logs.go:276] 0 containers: []
	W0319 20:39:36.720419   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:36.720430   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:36.720450   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:36.804124   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:36.804155   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:36.851795   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:36.851826   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:36.911233   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:36.911262   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:36.926684   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:36.926713   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:37.003849   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:39.504955   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:39.520814   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:39.520889   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:39.566992   59621 cri.go:89] found id: ""
	I0319 20:39:39.567017   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.567024   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:39.567030   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:39.567094   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:39.612890   59621 cri.go:89] found id: ""
	I0319 20:39:39.612920   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.612930   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:39.612938   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:39.613005   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:39.655935   59621 cri.go:89] found id: ""
	I0319 20:39:39.655964   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.655976   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:39.655984   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:39.656060   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:39.697255   59621 cri.go:89] found id: ""
	I0319 20:39:39.697283   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.697294   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:39.697301   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:39.697358   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:39.737468   59621 cri.go:89] found id: ""
	I0319 20:39:39.737501   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.737508   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:39.737514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:39.737568   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:39.775282   59621 cri.go:89] found id: ""
	I0319 20:39:39.775306   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.775314   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:39.775319   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:39.775405   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:39.814944   59621 cri.go:89] found id: ""
	I0319 20:39:39.814973   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.814982   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:39.814990   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:39.815049   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:39.860951   59621 cri.go:89] found id: ""
	I0319 20:39:39.860977   59621 logs.go:276] 0 containers: []
	W0319 20:39:39.860987   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:39.860997   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:39.861011   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:39.922812   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:39.922849   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:39.939334   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:39.939360   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:40.049858   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:40.049895   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:40.049911   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:40.139797   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:40.139828   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:37.971651   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.973467   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:39.663277   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.162392   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.587250   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.589855   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:42.687261   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:42.704425   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:42.704512   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:42.745507   59621 cri.go:89] found id: ""
	I0319 20:39:42.745534   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.745542   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:42.745548   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:42.745595   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:42.783895   59621 cri.go:89] found id: ""
	I0319 20:39:42.783929   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.783940   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:42.783947   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:42.784007   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:42.823690   59621 cri.go:89] found id: ""
	I0319 20:39:42.823720   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.823732   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:42.823738   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:42.823795   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:42.865556   59621 cri.go:89] found id: ""
	I0319 20:39:42.865581   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.865591   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:42.865606   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:42.865661   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:42.907479   59621 cri.go:89] found id: ""
	I0319 20:39:42.907501   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.907509   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:42.907514   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:42.907557   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:42.951940   59621 cri.go:89] found id: ""
	I0319 20:39:42.951974   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.951985   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:42.951992   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:42.952053   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.997854   59621 cri.go:89] found id: ""
	I0319 20:39:42.997886   59621 logs.go:276] 0 containers: []
	W0319 20:39:42.997896   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:42.997904   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:42.997961   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:43.042240   59621 cri.go:89] found id: ""
	I0319 20:39:43.042278   59621 logs.go:276] 0 containers: []
	W0319 20:39:43.042295   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:43.042306   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:43.042329   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:43.056792   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:43.056815   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:43.142211   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:43.142229   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:43.142243   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:43.228553   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:43.228591   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:43.277536   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:43.277565   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:45.838607   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:45.860510   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.860592   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.926869   59621 cri.go:89] found id: ""
	I0319 20:39:45.926901   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.926912   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:39:45.926919   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.926977   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.980027   59621 cri.go:89] found id: ""
	I0319 20:39:45.980052   59621 logs.go:276] 0 containers: []
	W0319 20:39:45.980063   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:39:45.980070   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.980129   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:46.045211   59621 cri.go:89] found id: ""
	I0319 20:39:46.045247   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.045258   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:39:46.045269   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:46.045332   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:46.086706   59621 cri.go:89] found id: ""
	I0319 20:39:46.086729   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.086739   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:39:46.086747   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:46.086807   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:46.131454   59621 cri.go:89] found id: ""
	I0319 20:39:46.131481   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.131492   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:39:46.131499   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:46.131573   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:46.175287   59621 cri.go:89] found id: ""
	I0319 20:39:46.175315   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.175325   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:39:46.175331   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:46.175395   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:42.472493   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.973064   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:44.162740   59415 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:45.162232   59415 pod_ready.go:81] duration metric: took 4m0.006756965s for pod "metrics-server-57f55c9bc5-xbh7v" in "kube-system" namespace to be "Ready" ...
	E0319 20:39:45.162255   59415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:39:45.162262   59415 pod_ready.go:38] duration metric: took 4m8.418792568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:39:45.162277   59415 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:39:45.162309   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:45.162363   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:45.219659   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:45.219685   59415 cri.go:89] found id: ""
	I0319 20:39:45.219694   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:45.219737   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.225012   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:45.225072   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:45.268783   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.268803   59415 cri.go:89] found id: ""
	I0319 20:39:45.268810   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:45.268875   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.273758   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:45.273813   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:45.316870   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.316893   59415 cri.go:89] found id: ""
	I0319 20:39:45.316901   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:45.316942   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.321910   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:45.321968   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:45.360077   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:45.360098   59415 cri.go:89] found id: ""
	I0319 20:39:45.360105   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:45.360157   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.365517   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:45.365580   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:45.407686   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:45.407704   59415 cri.go:89] found id: ""
	I0319 20:39:45.407711   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:45.407752   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.412894   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:45.412954   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:45.451930   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:45.451953   59415 cri.go:89] found id: ""
	I0319 20:39:45.451964   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:45.452009   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.456634   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:45.456699   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:45.498575   59415 cri.go:89] found id: ""
	I0319 20:39:45.498601   59415 logs.go:276] 0 containers: []
	W0319 20:39:45.498611   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:45.498619   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:45.498678   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:45.548381   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.548400   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.548405   59415 cri.go:89] found id: ""
	I0319 20:39:45.548411   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:45.548469   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.553470   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:45.558445   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:45.558471   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:45.603464   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:45.603490   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:45.650631   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:45.650663   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:45.668744   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:45.668775   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:45.823596   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:45.823625   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:45.891879   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:45.891911   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:45.944237   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:45.944284   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:46.005819   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:46.005848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:46.069819   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.069848   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.648008   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.648051   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:46.701035   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.701073   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.753159   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:46.753189   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:46.804730   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:46.804767   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:47.087453   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.088165   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:46.219167   59621 cri.go:89] found id: ""
	I0319 20:39:46.220447   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.220458   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:46.220463   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:39:46.220509   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:39:46.260031   59621 cri.go:89] found id: ""
	I0319 20:39:46.260056   59621 logs.go:276] 0 containers: []
	W0319 20:39:46.260064   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:39:46.260072   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:46.260087   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:46.314744   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:46.314774   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:46.331752   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:46.331781   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:39:46.413047   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:39:46.413071   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:46.413082   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:46.521930   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:39:46.521959   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.068570   59621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.083471   59621 kubeadm.go:591] duration metric: took 4m3.773669285s to restartPrimaryControlPlane
	W0319 20:39:49.083553   59621 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:39:49.083587   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:39:51.077482   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.993876364s)
	I0319 20:39:51.077569   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:51.096308   59621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:39:51.109534   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:39:51.121863   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:39:51.121882   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:39:51.121925   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:39:51.133221   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:39:51.133265   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:39:51.144678   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:39:51.155937   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:39:51.155998   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:39:51.167490   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.179833   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:39:51.179881   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:39:51.192446   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:39:51.204562   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:39:51.204615   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:39:51.216879   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:39:47.471171   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:49.472374   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:51.304526   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:39:51.304604   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:39:51.475356   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:39:51.475523   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:39:51.475670   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:39:51.688962   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:39:51.690682   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:39:51.690764   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:39:51.690847   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:39:51.690971   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:39:51.691063   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:39:51.691162   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:39:51.691254   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:39:51.691347   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:39:51.691441   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:39:51.691567   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:39:51.691706   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:39:51.691761   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:39:51.691852   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:39:51.840938   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:39:51.902053   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:39:52.213473   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:39:52.366242   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:39:52.381307   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:39:52.382441   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:39:52.382543   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:39:52.543512   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:39:49.351186   59415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:39:49.368780   59415 api_server.go:72] duration metric: took 4m19.832131165s to wait for apiserver process to appear ...
	I0319 20:39:49.368806   59415 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:39:49.368844   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:49.368913   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:49.408912   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:49.408937   59415 cri.go:89] found id: ""
	I0319 20:39:49.408947   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:49.409010   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.414194   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:49.414263   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:49.456271   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.456298   59415 cri.go:89] found id: ""
	I0319 20:39:49.456307   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:49.456374   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.461250   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:49.461316   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:49.510029   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:49.510052   59415 cri.go:89] found id: ""
	I0319 20:39:49.510061   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:49.510119   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.515604   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:49.515667   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:49.561004   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:49.561026   59415 cri.go:89] found id: ""
	I0319 20:39:49.561034   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:49.561100   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.566205   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:49.566276   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:49.610666   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:49.610685   59415 cri.go:89] found id: ""
	I0319 20:39:49.610693   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:49.610735   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.615683   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:49.615730   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:49.657632   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:49.657648   59415 cri.go:89] found id: ""
	I0319 20:39:49.657655   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:49.657711   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.662128   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:49.662172   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:49.699037   59415 cri.go:89] found id: ""
	I0319 20:39:49.699060   59415 logs.go:276] 0 containers: []
	W0319 20:39:49.699068   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:49.699074   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:49.699131   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:49.754331   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:49.754353   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:49.754359   59415 cri.go:89] found id: ""
	I0319 20:39:49.754368   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:49.754437   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.759210   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:49.763797   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:49.763816   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:49.818285   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:49.818314   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:49.946232   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:49.946266   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:49.994160   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:49.994186   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:50.042893   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:50.042923   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:50.099333   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:50.099362   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:50.547046   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:50.547082   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:50.593081   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:50.593111   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:50.632611   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:50.632643   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:50.689610   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:50.689641   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:50.707961   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:50.707997   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:50.752684   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:50.752713   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:50.790114   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:50.790139   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:51.089647   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.588183   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:52.545387   59621 out.go:204]   - Booting up control plane ...
	I0319 20:39:52.545507   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:39:52.559916   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:39:52.560005   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:39:52.560471   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:39:52.564563   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:39:51.972170   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:54.471260   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:56.472093   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:53.338254   59415 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0319 20:39:53.343669   59415 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0319 20:39:53.344796   59415 api_server.go:141] control plane version: v1.29.3
	I0319 20:39:53.344816   59415 api_server.go:131] duration metric: took 3.976004163s to wait for apiserver health ...
	I0319 20:39:53.344824   59415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:39:53.344854   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:39:53.344896   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:39:53.407914   59415 cri.go:89] found id: "e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.407939   59415 cri.go:89] found id: ""
	I0319 20:39:53.407948   59415 logs.go:276] 1 containers: [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166]
	I0319 20:39:53.408000   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.414299   59415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:39:53.414360   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:39:53.466923   59415 cri.go:89] found id: "c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:53.466944   59415 cri.go:89] found id: ""
	I0319 20:39:53.466953   59415 logs.go:276] 1 containers: [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8]
	I0319 20:39:53.467006   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.472181   59415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:39:53.472247   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:39:53.511808   59415 cri.go:89] found id: "2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:53.511830   59415 cri.go:89] found id: ""
	I0319 20:39:53.511839   59415 logs.go:276] 1 containers: [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef]
	I0319 20:39:53.511900   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.517386   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:39:53.517445   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:39:53.560360   59415 cri.go:89] found id: "f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:53.560383   59415 cri.go:89] found id: ""
	I0319 20:39:53.560390   59415 logs.go:276] 1 containers: [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be]
	I0319 20:39:53.560433   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.565131   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:39:53.565181   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:39:53.611243   59415 cri.go:89] found id: "b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:53.611264   59415 cri.go:89] found id: ""
	I0319 20:39:53.611273   59415 logs.go:276] 1 containers: [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748]
	I0319 20:39:53.611326   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.616327   59415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:39:53.616391   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:39:53.656775   59415 cri.go:89] found id: "33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:53.656794   59415 cri.go:89] found id: ""
	I0319 20:39:53.656801   59415 logs.go:276] 1 containers: [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3]
	I0319 20:39:53.656846   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.661915   59415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:39:53.661966   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:39:53.700363   59415 cri.go:89] found id: ""
	I0319 20:39:53.700389   59415 logs.go:276] 0 containers: []
	W0319 20:39:53.700396   59415 logs.go:278] No container was found matching "kindnet"
	I0319 20:39:53.700401   59415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0319 20:39:53.700454   59415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0319 20:39:53.750337   59415 cri.go:89] found id: "54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:53.750357   59415 cri.go:89] found id: "7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:53.750360   59415 cri.go:89] found id: ""
	I0319 20:39:53.750373   59415 logs.go:276] 2 containers: [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5]
	I0319 20:39:53.750426   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.755835   59415 ssh_runner.go:195] Run: which crictl
	I0319 20:39:53.761078   59415 logs.go:123] Gathering logs for kubelet ...
	I0319 20:39:53.761099   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:39:53.812898   59415 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:39:53.812928   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 20:39:53.934451   59415 logs.go:123] Gathering logs for kube-apiserver [e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166] ...
	I0319 20:39:53.934482   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2f9da9940d123bb27402fd6d768832843cae44201cf244cbf14dd118579b166"
	I0319 20:39:53.989117   59415 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:39:53.989148   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:39:54.386028   59415 logs.go:123] Gathering logs for storage-provisioner [7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5] ...
	I0319 20:39:54.386060   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cf3f6946847fa8b638e151fe37b710ad6d3d02680efbbe79e1efa391c23bff5"
	I0319 20:39:54.437864   59415 logs.go:123] Gathering logs for dmesg ...
	I0319 20:39:54.437893   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:39:54.456559   59415 logs.go:123] Gathering logs for etcd [c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8] ...
	I0319 20:39:54.456584   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2391bc9672e3e7c0889c974301f8394ee09c45422111d97613b6abbdeebc1a8"
	I0319 20:39:54.506564   59415 logs.go:123] Gathering logs for coredns [2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef] ...
	I0319 20:39:54.506593   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b137c65a31110b8300d990c9ace2335d350694c45d991b8930118c7226b03ef"
	I0319 20:39:54.551120   59415 logs.go:123] Gathering logs for kube-scheduler [f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be] ...
	I0319 20:39:54.551151   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f6bbd4f740d369809754f9862d67f99bcd037e2d7293d174a74fa94ef291be"
	I0319 20:39:54.595768   59415 logs.go:123] Gathering logs for kube-proxy [b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748] ...
	I0319 20:39:54.595794   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8bd4bb1ef229a3de3017dd9e1eccaf220712a8f99522343296ff67dd8475748"
	I0319 20:39:54.637715   59415 logs.go:123] Gathering logs for kube-controller-manager [33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3] ...
	I0319 20:39:54.637745   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f6eb05f3ff8af97188d9eb69d08fa72f0dff9f8191916ebb3d94afb6feecd3"
	I0319 20:39:54.689666   59415 logs.go:123] Gathering logs for storage-provisioner [54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff] ...
	I0319 20:39:54.689706   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54948b2ac3f0189a1a1fc6caa6f321a5bd8b04dc4e444f0f2d0267016cdf41ff"
	I0319 20:39:54.731821   59415 logs.go:123] Gathering logs for container status ...
	I0319 20:39:54.731851   59415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 20:39:57.287839   59415 system_pods.go:59] 8 kube-system pods found
	I0319 20:39:57.287866   59415 system_pods.go:61] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.287870   59415 system_pods.go:61] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.287874   59415 system_pods.go:61] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.287878   59415 system_pods.go:61] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.287881   59415 system_pods.go:61] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.287884   59415 system_pods.go:61] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.287890   59415 system_pods.go:61] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.287894   59415 system_pods.go:61] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.287901   59415 system_pods.go:74] duration metric: took 3.943071923s to wait for pod list to return data ...
	I0319 20:39:57.287907   59415 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:39:57.290568   59415 default_sa.go:45] found service account: "default"
	I0319 20:39:57.290587   59415 default_sa.go:55] duration metric: took 2.674741ms for default service account to be created ...
	I0319 20:39:57.290594   59415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:39:57.296691   59415 system_pods.go:86] 8 kube-system pods found
	I0319 20:39:57.296710   59415 system_pods.go:89] "coredns-76f75df574-9tdfg" [f1b2be11-82a4-49cd-b937-ed38214db991] Running
	I0319 20:39:57.296718   59415 system_pods.go:89] "etcd-embed-certs-421660" [e274d447-6d81-4dfb-b0fb-d77283e086f1] Running
	I0319 20:39:57.296722   59415 system_pods.go:89] "kube-apiserver-embed-certs-421660" [77d14ac9-c1c2-470f-b9d9-15b3524c8317] Running
	I0319 20:39:57.296726   59415 system_pods.go:89] "kube-controller-manager-embed-certs-421660" [d8980373-cb27-4590-8732-8108cedfbf45] Running
	I0319 20:39:57.296730   59415 system_pods.go:89] "kube-proxy-qvn26" [9d2869d5-3602-4cc0-80c1-cf01cda5971c] Running
	I0319 20:39:57.296734   59415 system_pods.go:89] "kube-scheduler-embed-certs-421660" [b2babc25-5f9f-428f-8445-60a61b763b53] Running
	I0319 20:39:57.296741   59415 system_pods.go:89] "metrics-server-57f55c9bc5-xbh7v" [7cb1baf4-fcb9-4126-9437-45fc6228821f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:39:57.296747   59415 system_pods.go:89] "storage-provisioner" [b84b7ff7-ed12-4404-b142-2c331a84cea0] Running
	I0319 20:39:57.296753   59415 system_pods.go:126] duration metric: took 6.154905ms to wait for k8s-apps to be running ...
	I0319 20:39:57.296762   59415 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:39:57.296803   59415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:39:57.313729   59415 system_svc.go:56] duration metric: took 16.960151ms WaitForService to wait for kubelet
	I0319 20:39:57.313753   59415 kubeadm.go:576] duration metric: took 4m27.777105553s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:39:57.313777   59415 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:39:57.316765   59415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:39:57.316789   59415 node_conditions.go:123] node cpu capacity is 2
	I0319 20:39:57.316803   59415 node_conditions.go:105] duration metric: took 3.021397ms to run NodePressure ...
	I0319 20:39:57.316813   59415 start.go:240] waiting for startup goroutines ...
	I0319 20:39:57.316820   59415 start.go:245] waiting for cluster config update ...
	I0319 20:39:57.316830   59415 start.go:254] writing updated cluster config ...
	I0319 20:39:57.317087   59415 ssh_runner.go:195] Run: rm -f paused
	I0319 20:39:57.365814   59415 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:39:57.368111   59415 out.go:177] * Done! kubectl is now configured to use "embed-certs-421660" cluster and "default" namespace by default
	I0319 20:39:56.088199   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.088480   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.091027   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:39:58.971917   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:00.972329   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:02.589430   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.088313   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:03.474330   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:05.972928   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:07.587315   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:09.588829   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:08.471254   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:10.472963   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.087905   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:14.589786   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:12.973661   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:15.471559   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.087489   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.087559   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:17.473159   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:19.975538   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:21.090446   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:23.588215   60008 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.581466   60008 pod_ready.go:81] duration metric: took 4m0.000988658s for pod "metrics-server-57f55c9bc5-ddl2q" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:24.581495   60008 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0319 20:40:24.581512   60008 pod_ready.go:38] duration metric: took 4m13.547382951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:24.581535   60008 kubeadm.go:591] duration metric: took 4m20.894503953s to restartPrimaryControlPlane
	W0319 20:40:24.581583   60008 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:24.581611   60008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:22.472853   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:24.972183   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:26.973460   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:28.974127   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:31.475479   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:32.565374   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:40:32.566581   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:32.566753   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:33.973020   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:36.471909   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:37.567144   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:37.567356   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:38.473008   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:40.975638   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:43.473149   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:45.474566   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:47.567760   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:40:47.568053   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:40:47.972615   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:50.472593   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:52.973302   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:55.472067   59019 pod_ready.go:102] pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace has status "Ready":"False"
	I0319 20:40:56.465422   59019 pod_ready.go:81] duration metric: took 4m0.000285496s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" ...
	E0319 20:40:56.465453   59019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jvlnl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0319 20:40:56.465495   59019 pod_ready.go:38] duration metric: took 4m7.567400515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:40:56.465521   59019 kubeadm.go:591] duration metric: took 4m16.916387223s to restartPrimaryControlPlane
	W0319 20:40:56.465574   59019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0319 20:40:56.465604   59019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:40:56.963018   60008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.381377433s)
	I0319 20:40:56.963106   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:40:56.982252   60008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:40:56.994310   60008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:40:57.004950   60008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:40:57.004974   60008 kubeadm.go:156] found existing configuration files:
	
	I0319 20:40:57.005018   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0319 20:40:57.015009   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:40:57.015070   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:40:57.026153   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0319 20:40:57.036560   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:40:57.036611   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:40:57.047469   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.060137   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:40:57.060188   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:40:57.073305   60008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0319 20:40:57.083299   60008 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:40:57.083372   60008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:40:57.093788   60008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:40:57.352358   60008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:05.910387   60008 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0319 20:41:05.910460   60008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:05.910542   60008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:05.910660   60008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:05.910798   60008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:05.910903   60008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:05.912366   60008 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:05.912439   60008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:05.912493   60008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:05.912563   60008 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:05.912614   60008 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:05.912673   60008 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:05.912726   60008 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:05.912809   60008 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:05.912874   60008 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:05.912975   60008 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:05.913082   60008 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:05.913142   60008 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:05.913197   60008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:05.913258   60008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:05.913363   60008 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:05.913439   60008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:05.913536   60008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:05.913616   60008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:05.913738   60008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:05.913841   60008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:05.915394   60008 out.go:204]   - Booting up control plane ...
	I0319 20:41:05.915486   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:05.915589   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:05.915682   60008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:05.915832   60008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:05.915951   60008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:05.916010   60008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:05.916154   60008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:05.916255   60008 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.505433 seconds
	I0319 20:41:05.916392   60008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:05.916545   60008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:05.916628   60008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:05.916839   60008 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-385240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:05.916908   60008 kubeadm.go:309] [bootstrap-token] Using token: y9pq78.ls188thm3dr5dool
	I0319 20:41:05.918444   60008 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:05.918567   60008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:05.918654   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:05.918821   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:05.918999   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:05.919147   60008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:05.919260   60008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:05.919429   60008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:05.919498   60008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:05.919572   60008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:05.919582   60008 kubeadm.go:309] 
	I0319 20:41:05.919665   60008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:05.919678   60008 kubeadm.go:309] 
	I0319 20:41:05.919787   60008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:05.919799   60008 kubeadm.go:309] 
	I0319 20:41:05.919834   60008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:05.919929   60008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:05.920007   60008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:05.920017   60008 kubeadm.go:309] 
	I0319 20:41:05.920102   60008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:05.920112   60008 kubeadm.go:309] 
	I0319 20:41:05.920182   60008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:05.920191   60008 kubeadm.go:309] 
	I0319 20:41:05.920284   60008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:05.920411   60008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:05.920506   60008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:05.920520   60008 kubeadm.go:309] 
	I0319 20:41:05.920648   60008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:05.920762   60008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:05.920771   60008 kubeadm.go:309] 
	I0319 20:41:05.920901   60008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921063   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:05.921099   60008 kubeadm.go:309] 	--control-plane 
	I0319 20:41:05.921105   60008 kubeadm.go:309] 
	I0319 20:41:05.921207   60008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:05.921216   60008 kubeadm.go:309] 
	I0319 20:41:05.921285   60008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token y9pq78.ls188thm3dr5dool \
	I0319 20:41:05.921386   60008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:05.921397   60008 cni.go:84] Creating CNI manager for ""
	I0319 20:41:05.921403   60008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:05.922921   60008 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:05.924221   60008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:05.941888   60008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0319 20:41:06.040294   60008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:06.040378   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.040413   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-385240 minikube.k8s.io/updated_at=2024_03_19T20_41_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=default-k8s-diff-port-385240 minikube.k8s.io/primary=true
	I0319 20:41:06.104038   60008 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:06.266168   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:06.766345   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.266622   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.766418   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.266864   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:08.766777   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.266420   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:09.766319   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:10.266990   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:07.568473   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:07.568751   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:10.766714   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.266839   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:11.767222   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.266933   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:12.766390   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.266562   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:13.766618   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.267159   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:14.767010   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.266307   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:15.767002   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.266488   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:16.766567   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.266789   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:17.766935   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.266312   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.767202   60008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:18.904766   60008 kubeadm.go:1107] duration metric: took 12.864451937s to wait for elevateKubeSystemPrivileges
	W0319 20:41:18.904802   60008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:18.904810   60008 kubeadm.go:393] duration metric: took 5m15.275720912s to StartCluster
	I0319 20:41:18.904826   60008 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.904910   60008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:18.906545   60008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:18.906817   60008 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.77 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:18.908538   60008 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:18.906944   60008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:18.907019   60008 config.go:182] Loaded profile config "default-k8s-diff-port-385240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:41:18.910084   60008 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910095   60008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:18.910100   60008 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-385240"
	I0319 20:41:18.910125   60008 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:18.910135   60008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-385240"
	W0319 20:41:18.910141   60008 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:18.910255   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910127   60008 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.910313   60008 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:18.910334   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.910603   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910635   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910647   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910667   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.910692   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.910671   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.927094   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0319 20:41:18.927240   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0319 20:41:18.927517   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.927620   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928036   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928059   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928074   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0319 20:41:18.928331   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.928360   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.928492   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928538   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.928737   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.928993   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.929009   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.929046   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.929066   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929108   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.929338   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.929862   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.929893   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.932815   60008 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-385240"
	W0319 20:41:18.932838   60008 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:18.932865   60008 host.go:66] Checking if "default-k8s-diff-port-385240" exists ...
	I0319 20:41:18.933211   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.933241   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.945888   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0319 20:41:18.946351   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.946842   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.946869   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.947426   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.947600   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.947808   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0319 20:41:18.948220   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.948367   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0319 20:41:18.948739   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.948753   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.949222   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.949277   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.951252   60008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:18.949736   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.950173   60008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:18.951720   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.952838   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.952813   60008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:18.952917   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:18.952934   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.952815   60008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:18.953264   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.953460   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.955228   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.957199   60008 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:18.958698   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:18.958715   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:18.958733   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.956502   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.957073   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.958806   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.958845   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.959306   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.959485   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.959783   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.961410   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961775   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.961802   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.961893   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.962065   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.962213   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.962369   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:18.975560   60008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0319 20:41:18.976026   60008 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:18.976503   60008 main.go:141] libmachine: Using API Version  1
	I0319 20:41:18.976524   60008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:18.976893   60008 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:18.977128   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetState
	I0319 20:41:18.978582   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .DriverName
	I0319 20:41:18.978862   60008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:18.978881   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:18.978898   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHHostname
	I0319 20:41:18.981356   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981730   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:fd:f0", ip: ""} in network mk-default-k8s-diff-port-385240: {Iface:virbr1 ExpiryTime:2024-03-19 21:35:49 +0000 UTC Type:0 Mac:52:54:00:46:fd:f0 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:default-k8s-diff-port-385240 Clientid:01:52:54:00:46:fd:f0}
	I0319 20:41:18.981762   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | domain default-k8s-diff-port-385240 has defined IP address 192.168.39.77 and MAC address 52:54:00:46:fd:f0 in network mk-default-k8s-diff-port-385240
	I0319 20:41:18.981875   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHPort
	I0319 20:41:18.982056   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHKeyPath
	I0319 20:41:18.982192   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .GetSSHUsername
	I0319 20:41:18.982337   60008 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/default-k8s-diff-port-385240/id_rsa Username:docker}
	I0319 20:41:19.126985   60008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:19.188792   60008 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198961   60008 node_ready.go:49] node "default-k8s-diff-port-385240" has status "Ready":"True"
	I0319 20:41:19.198981   60008 node_ready.go:38] duration metric: took 10.160382ms for node "default-k8s-diff-port-385240" to be "Ready" ...
	I0319 20:41:19.198992   60008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:19.209346   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:19.335212   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:19.414291   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:19.506570   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:19.506590   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:19.651892   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:19.651916   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:19.808237   60008 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:19.808282   60008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:19.924353   60008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:20.583635   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.169310347s)
	I0319 20:41:20.583700   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.583717   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.583981   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.583991   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.584015   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.584027   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.584253   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.584282   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585518   60008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250274289s)
	I0319 20:41:20.585568   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585584   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.585855   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.585879   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.585888   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.585902   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.585916   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.586162   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.586168   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.586177   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.609166   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.609183   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.609453   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.609492   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.609502   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.750409   60008 pod_ready.go:92] pod "coredns-76f75df574-4rq6h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:20.750433   60008 pod_ready.go:81] duration metric: took 1.541065393s for pod "coredns-76f75df574-4rq6h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.750442   60008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:20.869692   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.869719   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.869995   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) DBG | Closing plugin on server side
	I0319 20:41:20.870000   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870025   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870045   60008 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:20.870057   60008 main.go:141] libmachine: (default-k8s-diff-port-385240) Calling .Close
	I0319 20:41:20.870336   60008 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:20.870352   60008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:20.870366   60008 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-385240"
	I0319 20:41:20.872093   60008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0319 20:41:20.873465   60008 addons.go:505] duration metric: took 1.966520277s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0319 20:41:21.260509   60008 pod_ready.go:92] pod "coredns-76f75df574-swxdt" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.260533   60008 pod_ready.go:81] duration metric: took 510.083899ms for pod "coredns-76f75df574-swxdt" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.260543   60008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268298   60008 pod_ready.go:92] pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.268324   60008 pod_ready.go:81] duration metric: took 7.772878ms for pod "etcd-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.268335   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274436   60008 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.274461   60008 pod_ready.go:81] duration metric: took 6.117464ms for pod "kube-apiserver-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.274472   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281324   60008 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.281347   60008 pod_ready.go:81] duration metric: took 6.866088ms for pod "kube-controller-manager-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.281367   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.593980   60008 pod_ready.go:92] pod "kube-proxy-j7ghm" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.594001   60008 pod_ready.go:81] duration metric: took 312.62702ms for pod "kube-proxy-j7ghm" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.594009   60008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993321   60008 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:21.993346   60008 pod_ready.go:81] duration metric: took 399.330556ms for pod "kube-scheduler-default-k8s-diff-port-385240" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:21.993362   60008 pod_ready.go:38] duration metric: took 2.794359581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:21.993375   60008 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:21.993423   60008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:22.010583   60008 api_server.go:72] duration metric: took 3.10372573s to wait for apiserver process to appear ...
	I0319 20:41:22.010609   60008 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:22.010629   60008 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8444/healthz ...
	I0319 20:41:22.015218   60008 api_server.go:279] https://192.168.39.77:8444/healthz returned 200:
	ok
	I0319 20:41:22.016276   60008 api_server.go:141] control plane version: v1.29.3
	I0319 20:41:22.016291   60008 api_server.go:131] duration metric: took 5.6763ms to wait for apiserver health ...
	I0319 20:41:22.016298   60008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:22.197418   60008 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:22.197454   60008 system_pods.go:61] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.197460   60008 system_pods.go:61] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.197465   60008 system_pods.go:61] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.197470   60008 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.197476   60008 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.197481   60008 system_pods.go:61] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.197485   60008 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.197494   60008 system_pods.go:61] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.197500   60008 system_pods.go:61] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.197514   60008 system_pods.go:74] duration metric: took 181.210964ms to wait for pod list to return data ...
	I0319 20:41:22.197526   60008 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:22.392702   60008 default_sa.go:45] found service account: "default"
	I0319 20:41:22.392738   60008 default_sa.go:55] duration metric: took 195.195704ms for default service account to be created ...
	I0319 20:41:22.392751   60008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:22.595946   60008 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:22.595975   60008 system_pods.go:89] "coredns-76f75df574-4rq6h" [97f3ed0d-0300-4f53-bead-79ccbd6d17c0] Running
	I0319 20:41:22.595980   60008 system_pods.go:89] "coredns-76f75df574-swxdt" [3ae5aa99-e1a7-4fe4-bbc9-9f88f0b320d4] Running
	I0319 20:41:22.595985   60008 system_pods.go:89] "etcd-default-k8s-diff-port-385240" [3539908a-7354-4e37-960d-de2d2491e5a1] Running
	I0319 20:41:22.595991   60008 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-385240" [2bbf2343-33e5-446c-a2d4-50a4013f35e3] Running
	I0319 20:41:22.595996   60008 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-385240" [1562e9c9-cd2f-4928-ac5f-cb34bd7e5fbe] Running
	I0319 20:41:22.596006   60008 system_pods.go:89] "kube-proxy-j7ghm" [95092d52-b83c-4c36-81b2-cd3875cf0724] Running
	I0319 20:41:22.596010   60008 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-385240" [d092f295-0799-4bf6-9a0a-a5139e525f7b] Running
	I0319 20:41:22.596016   60008 system_pods.go:89] "metrics-server-57f55c9bc5-nv288" [17b4b56d-bbde-4dbf-8441-bbaee4f8ded5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:22.596022   60008 system_pods.go:89] "storage-provisioner" [b314e502-0cf6-497c-9129-8eae14086712] Running
	I0319 20:41:22.596034   60008 system_pods.go:126] duration metric: took 203.277741ms to wait for k8s-apps to be running ...
	I0319 20:41:22.596043   60008 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:22.596087   60008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:22.615372   60008 system_svc.go:56] duration metric: took 19.319488ms WaitForService to wait for kubelet
	I0319 20:41:22.615396   60008 kubeadm.go:576] duration metric: took 3.708546167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:22.615413   60008 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:22.793277   60008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:22.793303   60008 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:22.793313   60008 node_conditions.go:105] duration metric: took 177.89499ms to run NodePressure ...
	I0319 20:41:22.793325   60008 start.go:240] waiting for startup goroutines ...
	I0319 20:41:22.793331   60008 start.go:245] waiting for cluster config update ...
	I0319 20:41:22.793342   60008 start.go:254] writing updated cluster config ...
	I0319 20:41:22.793598   60008 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:22.845339   60008 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0319 20:41:22.847429   60008 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-385240" cluster and "default" namespace by default
	I0319 20:41:29.064044   59019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.598411816s)
	I0319 20:41:29.064115   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:29.082924   59019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 20:41:29.095050   59019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:29.106905   59019 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:29.106918   59019 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:29.106962   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:29.118153   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:29.118209   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:29.128632   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:29.140341   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:29.140401   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:29.151723   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.162305   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:29.162365   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:29.173654   59019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:29.185155   59019 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:29.185211   59019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 20:41:29.196015   59019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:29.260934   59019 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0319 20:41:29.261054   59019 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:29.412424   59019 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:29.412592   59019 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:29.412759   59019 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:29.636019   59019 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:29.638046   59019 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:29.638158   59019 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:29.638216   59019 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:29.638279   59019 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:29.638331   59019 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:29.645456   59019 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:29.645553   59019 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:29.645610   59019 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:29.645663   59019 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:29.645725   59019 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:29.645788   59019 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:29.645822   59019 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:29.645869   59019 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:29.895850   59019 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:30.248635   59019 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 20:41:30.380474   59019 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:30.457908   59019 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:30.585194   59019 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:30.585852   59019 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:30.588394   59019 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:30.590147   59019 out.go:204]   - Booting up control plane ...
	I0319 20:41:30.590241   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:30.590353   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:30.590606   59019 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:30.611645   59019 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:30.614010   59019 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:30.614266   59019 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:30.757838   59019 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 20:41:30.757973   59019 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0319 20:41:31.758717   59019 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001332477s
	I0319 20:41:31.758819   59019 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 20:41:37.261282   59019 kubeadm.go:309] [api-check] The API server is healthy after 5.50238s
	I0319 20:41:37.275017   59019 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 20:41:37.299605   59019 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 20:41:37.335190   59019 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 20:41:37.335449   59019 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-414130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 20:41:37.350882   59019 kubeadm.go:309] [bootstrap-token] Using token: 0euy3c.pb7fih13u47u7k5a
	I0319 20:41:37.352692   59019 out.go:204]   - Configuring RBAC rules ...
	I0319 20:41:37.352796   59019 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 20:41:37.357551   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 20:41:37.365951   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 20:41:37.369544   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 20:41:37.376066   59019 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 20:41:37.379284   59019 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 20:41:37.669667   59019 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 20:41:38.120423   59019 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0319 20:41:38.668937   59019 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0319 20:41:38.670130   59019 kubeadm.go:309] 
	I0319 20:41:38.670236   59019 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0319 20:41:38.670251   59019 kubeadm.go:309] 
	I0319 20:41:38.670339   59019 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0319 20:41:38.670348   59019 kubeadm.go:309] 
	I0319 20:41:38.670369   59019 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0319 20:41:38.670451   59019 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 20:41:38.670520   59019 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 20:41:38.670530   59019 kubeadm.go:309] 
	I0319 20:41:38.670641   59019 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0319 20:41:38.670653   59019 kubeadm.go:309] 
	I0319 20:41:38.670720   59019 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 20:41:38.670731   59019 kubeadm.go:309] 
	I0319 20:41:38.670802   59019 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0319 20:41:38.670916   59019 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 20:41:38.671036   59019 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 20:41:38.671053   59019 kubeadm.go:309] 
	I0319 20:41:38.671185   59019 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 20:41:38.671332   59019 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0319 20:41:38.671351   59019 kubeadm.go:309] 
	I0319 20:41:38.671438   59019 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671588   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 \
	I0319 20:41:38.671609   59019 kubeadm.go:309] 	--control-plane 
	I0319 20:41:38.671613   59019 kubeadm.go:309] 
	I0319 20:41:38.671684   59019 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0319 20:41:38.671693   59019 kubeadm.go:309] 
	I0319 20:41:38.671758   59019 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0euy3c.pb7fih13u47u7k5a \
	I0319 20:41:38.671877   59019 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:879deda827233336560e3cc326bc0644668fcac16920bfef16156af7e2246fd7 
	I0319 20:41:38.672172   59019 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:38.672197   59019 cni.go:84] Creating CNI manager for ""
	I0319 20:41:38.672212   59019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 20:41:38.674158   59019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0319 20:41:38.675618   59019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0319 20:41:38.690458   59019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
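For reference, the 457-byte conflist written here configures the bridge CNI plugin. A rough sketch of what such a file looks like follows (illustrative only; the exact contents minikube generates, including the pod subnet, may differ):

	# Illustrative sketch of a bridge CNI conflist; not the exact file minikube wrote.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF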
	I0319 20:41:38.712520   59019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 20:41:38.712597   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:38.712616   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-414130 minikube.k8s.io/updated_at=2024_03_19T20_41_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=57e541620fb53b674b9a0dc316348b0ab51a75ce minikube.k8s.io/name=no-preload-414130 minikube.k8s.io/primary=true
	I0319 20:41:38.902263   59019 ops.go:34] apiserver oom_adj: -16
	I0319 20:41:38.902364   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.403054   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:39.903127   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.402786   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:40.903358   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.403414   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:41.902829   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.402506   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:42.903338   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.402784   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:43.902477   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.403152   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:44.903190   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.402544   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:45.903397   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:46.402785   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.570267   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:41:47.570544   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:41:47.570561   59621 kubeadm.go:309] 
	I0319 20:41:47.570624   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:41:47.570682   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:41:47.570691   59621 kubeadm.go:309] 
	I0319 20:41:47.570745   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:41:47.570793   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:41:47.570954   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:41:47.570978   59621 kubeadm.go:309] 
	I0319 20:41:47.571116   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:41:47.571164   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:41:47.571203   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:41:47.571210   59621 kubeadm.go:309] 
	I0319 20:41:47.571354   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:41:47.571463   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:41:47.571476   59621 kubeadm.go:309] 
	I0319 20:41:47.571612   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:41:47.571737   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:41:47.571835   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:41:47.571933   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:41:47.571945   59621 kubeadm.go:309] 
	I0319 20:41:47.572734   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:41:47.572851   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:41:47.572942   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0319 20:41:47.573079   59621 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
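	The diagnostics kubeadm suggests above can be gathered in one pass on the failing node (over minikube ssh or directly on the guest); a minimal sketch:

	# Sketch: collect the kubelet and control-plane diagnostics recommended above.
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then inspect the failing container's logs:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID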
	
	I0319 20:41:47.573148   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0319 20:41:48.833717   59621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.260539571s)
	I0319 20:41:48.833792   59621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:48.851716   59621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 20:41:48.865583   59621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 20:41:48.865611   59621 kubeadm.go:156] found existing configuration files:
	
	I0319 20:41:48.865662   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 20:41:48.877524   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 20:41:48.877608   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 20:41:48.888941   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 20:41:48.900526   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 20:41:48.900590   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 20:41:48.912082   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.924155   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 20:41:48.924209   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 20:41:48.936425   59621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 20:41:48.947451   59621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 20:41:48.947515   59621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
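	The grep-then-remove sequence above repeats the same check for each kubeconfig file; an equivalent compact form (a sketch, not minikube's actual implementation) is:

	# Sketch: drop kubeconfig files that do not reference the expected control-plane endpoint.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done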
	I0319 20:41:48.960003   59621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0319 20:41:49.040921   59621 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0319 20:41:49.041012   59621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0319 20:41:49.201676   59621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 20:41:49.201814   59621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 20:41:49.201937   59621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0319 20:41:49.416333   59621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 20:41:49.418033   59621 out.go:204]   - Generating certificates and keys ...
	I0319 20:41:49.418144   59621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0319 20:41:49.418225   59621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0319 20:41:49.418370   59621 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0319 20:41:49.418464   59621 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0319 20:41:49.418555   59621 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0319 20:41:49.418632   59621 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0319 20:41:49.418713   59621 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0319 20:41:49.418799   59621 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0319 20:41:49.419157   59621 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0319 20:41:49.419709   59621 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0319 20:41:49.419799   59621 kubeadm.go:309] [certs] Using the existing "sa" key
	I0319 20:41:49.419914   59621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 20:41:49.687633   59621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 20:41:49.937984   59621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 20:41:50.018670   59621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 20:41:50.231561   59621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 20:41:50.250617   59621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 20:41:50.251763   59621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 20:41:50.251841   59621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0319 20:41:50.426359   59621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 20:41:50.428067   59621 out.go:204]   - Booting up control plane ...
	I0319 20:41:50.428199   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 20:41:50.429268   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 20:41:50.430689   59621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 20:41:50.431815   59621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 20:41:50.435041   59621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0319 20:41:46.902656   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.402845   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:47.903436   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.402511   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:48.903073   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.402559   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:49.902914   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.402708   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:50.903441   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.403416   59019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 20:41:51.585670   59019 kubeadm.go:1107] duration metric: took 12.873132825s to wait for elevateKubeSystemPrivileges
	W0319 20:41:51.585714   59019 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0319 20:41:51.585724   59019 kubeadm.go:393] duration metric: took 5m12.093644869s to StartCluster
	I0319 20:41:51.585744   59019 settings.go:142] acquiring lock: {Name:mk47bd411616336d513428143c7512bf6af40e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.585835   59019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:41:51.588306   59019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/kubeconfig: {Name:mk47d0e85ac507119093d80f6195bf47489d840b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 20:41:51.588634   59019 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.29 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 20:41:51.590331   59019 out.go:177] * Verifying Kubernetes components...
	I0319 20:41:51.588755   59019 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0319 20:41:51.588891   59019 config.go:182] Loaded profile config "no-preload-414130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0319 20:41:51.590430   59019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-414130"
	I0319 20:41:51.591988   59019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 20:41:51.592020   59019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-414130"
	W0319 20:41:51.592038   59019 addons.go:243] addon storage-provisioner should already be in state true
	I0319 20:41:51.592069   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.590437   59019 addons.go:69] Setting default-storageclass=true in profile "no-preload-414130"
	I0319 20:41:51.590441   59019 addons.go:69] Setting metrics-server=true in profile "no-preload-414130"
	I0319 20:41:51.592098   59019 addons.go:234] Setting addon metrics-server=true in "no-preload-414130"
	W0319 20:41:51.592114   59019 addons.go:243] addon metrics-server should already be in state true
	I0319 20:41:51.592129   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.592164   59019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-414130"
	I0319 20:41:51.592450   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592479   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592505   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592532   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.592552   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.608909   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0319 20:41:51.609383   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.609942   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.609962   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.610565   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.610774   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.612725   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0319 20:41:51.612794   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0319 20:41:51.613141   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.613637   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.613660   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.614121   59019 addons.go:234] Setting addon default-storageclass=true in "no-preload-414130"
	W0319 20:41:51.614139   59019 addons.go:243] addon default-storageclass should already be in state true
	I0319 20:41:51.614167   59019 host.go:66] Checking if "no-preload-414130" exists ...
	I0319 20:41:51.614214   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.614482   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614512   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614774   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.614810   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.614876   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.615336   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.615369   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.615703   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.616237   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.616281   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.630175   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0319 20:41:51.630802   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.631279   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.631296   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.631645   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.632322   59019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:41:51.632356   59019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:41:51.634429   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0319 20:41:51.634865   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.635311   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.635324   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.635922   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.636075   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.637997   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.640025   59019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 20:41:51.641428   59019 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:51.641445   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 20:41:51.641462   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.644316   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644838   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.644853   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.644875   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0319 20:41:51.645162   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.645300   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.645365   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.645499   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.645613   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.645964   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.645976   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.646447   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.646663   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.648174   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.649872   59019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 20:41:51.651152   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 20:41:51.651177   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 20:41:51.651197   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.654111   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654523   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.654545   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.654792   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.654987   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.655156   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.655281   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.656648   59019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0319 20:41:51.656960   59019 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:41:51.657457   59019 main.go:141] libmachine: Using API Version  1
	I0319 20:41:51.657471   59019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:41:51.657751   59019 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:41:51.657948   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetState
	I0319 20:41:51.659265   59019 main.go:141] libmachine: (no-preload-414130) Calling .DriverName
	I0319 20:41:51.659503   59019 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:51.659517   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 20:41:51.659533   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHHostname
	I0319 20:41:51.662039   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662427   59019 main.go:141] libmachine: (no-preload-414130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f0:55", ip: ""} in network mk-no-preload-414130: {Iface:virbr4 ExpiryTime:2024-03-19 21:36:09 +0000 UTC Type:0 Mac:52:54:00:f0:f0:55 Iaid: IPaddr:192.168.72.29 Prefix:24 Hostname:no-preload-414130 Clientid:01:52:54:00:f0:f0:55}
	I0319 20:41:51.662447   59019 main.go:141] libmachine: (no-preload-414130) DBG | domain no-preload-414130 has defined IP address 192.168.72.29 and MAC address 52:54:00:f0:f0:55 in network mk-no-preload-414130
	I0319 20:41:51.662583   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHPort
	I0319 20:41:51.662757   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHKeyPath
	I0319 20:41:51.662879   59019 main.go:141] libmachine: (no-preload-414130) Calling .GetSSHUsername
	I0319 20:41:51.662991   59019 sshutil.go:53] new ssh client: &{IP:192.168.72.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/no-preload-414130/id_rsa Username:docker}
	I0319 20:41:51.845584   59019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 20:41:51.876597   59019 node_ready.go:35] waiting up to 6m0s for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886290   59019 node_ready.go:49] node "no-preload-414130" has status "Ready":"True"
	I0319 20:41:51.886308   59019 node_ready.go:38] duration metric: took 9.684309ms for node "no-preload-414130" to be "Ready" ...
	I0319 20:41:51.886315   59019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:51.893456   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:51.976850   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 20:41:52.031123   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 20:41:52.031144   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 20:41:52.133184   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 20:41:52.195945   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 20:41:52.195968   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 20:41:52.270721   59019 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.270745   59019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 20:41:52.407604   59019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 20:41:52.578113   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578140   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578511   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578524   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.578532   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.578557   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.578566   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.578809   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.578828   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:52.610849   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:52.610873   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:52.611246   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:52.611251   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:52.611269   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.342742   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.209525982s)
	I0319 20:41:53.342797   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.342808   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343131   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343159   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343163   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.343174   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.343194   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.343486   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.343503   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.343525   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.450430   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.450458   59019 pod_ready.go:81] duration metric: took 1.556981953s for pod "coredns-7db6d8ff4d-jm8cl" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.450478   59019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459425   59019 pod_ready.go:92] pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.459454   59019 pod_ready.go:81] duration metric: took 8.967211ms for pod "coredns-7db6d8ff4d-jtdrs" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.459467   59019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495144   59019 pod_ready.go:92] pod "etcd-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.495164   59019 pod_ready.go:81] duration metric: took 35.690498ms for pod "etcd-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.495173   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520382   59019 pod_ready.go:92] pod "kube-apiserver-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.520412   59019 pod_ready.go:81] duration metric: took 25.23062ms for pod "kube-apiserver-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.520426   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530859   59019 pod_ready.go:92] pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.530889   59019 pod_ready.go:81] duration metric: took 10.451233ms for pod "kube-controller-manager-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.530903   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.545946   59019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.13830463s)
	I0319 20:41:53.545994   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546009   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546304   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546323   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546333   59019 main.go:141] libmachine: Making call to close driver server
	I0319 20:41:53.546350   59019 main.go:141] libmachine: (no-preload-414130) Calling .Close
	I0319 20:41:53.546678   59019 main.go:141] libmachine: Successfully made call to close driver server
	I0319 20:41:53.546695   59019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0319 20:41:53.546706   59019 addons.go:470] Verifying addon metrics-server=true in "no-preload-414130"
	I0319 20:41:53.546764   59019 main.go:141] libmachine: (no-preload-414130) DBG | Closing plugin on server side
	I0319 20:41:53.548523   59019 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0319 20:41:53.549990   59019 addons.go:505] duration metric: took 1.961237309s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
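	Once the metrics-server addon is applied, it can be checked independently of the test harness; a short sketch (metrics typically take a minute or so to become available after the deployment starts):

	# Sketch: verify the metrics-server addon is deployed and serving.
	kubectl -n kube-system get deploy metrics-server
	kubectl top nodes    # errors until the metrics API has registered and scraped at least once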
	I0319 20:41:53.881082   59019 pod_ready.go:92] pod "kube-proxy-m7m4h" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:53.881107   59019 pod_ready.go:81] duration metric: took 350.197776ms for pod "kube-proxy-m7m4h" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:53.881116   59019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283891   59019 pod_ready.go:92] pod "kube-scheduler-no-preload-414130" in "kube-system" namespace has status "Ready":"True"
	I0319 20:41:54.283924   59019 pod_ready.go:81] duration metric: took 402.800741ms for pod "kube-scheduler-no-preload-414130" in "kube-system" namespace to be "Ready" ...
	I0319 20:41:54.283936   59019 pod_ready.go:38] duration metric: took 2.397611991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 20:41:54.283953   59019 api_server.go:52] waiting for apiserver process to appear ...
	I0319 20:41:54.284016   59019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:41:54.304606   59019 api_server.go:72] duration metric: took 2.715931012s to wait for apiserver process to appear ...
	I0319 20:41:54.304629   59019 api_server.go:88] waiting for apiserver healthz status ...
	I0319 20:41:54.304651   59019 api_server.go:253] Checking apiserver healthz at https://192.168.72.29:8443/healthz ...
	I0319 20:41:54.309292   59019 api_server.go:279] https://192.168.72.29:8443/healthz returned 200:
	ok
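	The healthz probe logged here can be reproduced by hand; by default the /healthz endpoint is readable even for unauthenticated requests (via the system:public-info-viewer binding), so a minimal sketch is:

	# Sketch: query the apiserver health endpoint directly; a healthy server returns "ok".
	curl -k https://192.168.72.29:8443/healthz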
	I0319 20:41:54.310195   59019 api_server.go:141] control plane version: v1.30.0-beta.0
	I0319 20:41:54.310215   59019 api_server.go:131] duration metric: took 5.579162ms to wait for apiserver health ...
	I0319 20:41:54.310225   59019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 20:41:54.488441   59019 system_pods.go:59] 9 kube-system pods found
	I0319 20:41:54.488475   59019 system_pods.go:61] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.488482   59019 system_pods.go:61] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.488487   59019 system_pods.go:61] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.488491   59019 system_pods.go:61] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.488496   59019 system_pods.go:61] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.488500   59019 system_pods.go:61] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.488505   59019 system_pods.go:61] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.488514   59019 system_pods.go:61] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.488520   59019 system_pods.go:61] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.488530   59019 system_pods.go:74] duration metric: took 178.298577ms to wait for pod list to return data ...
	I0319 20:41:54.488543   59019 default_sa.go:34] waiting for default service account to be created ...
	I0319 20:41:54.679537   59019 default_sa.go:45] found service account: "default"
	I0319 20:41:54.679560   59019 default_sa.go:55] duration metric: took 191.010696ms for default service account to be created ...
	I0319 20:41:54.679569   59019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 20:41:54.884163   59019 system_pods.go:86] 9 kube-system pods found
	I0319 20:41:54.884197   59019 system_pods.go:89] "coredns-7db6d8ff4d-jm8cl" [8c50b962-ed13-4511-8bef-2a2657f26276] Running
	I0319 20:41:54.884205   59019 system_pods.go:89] "coredns-7db6d8ff4d-jtdrs" [1199d0b5-8f7b-47ca-bdd4-af092b6150ca] Running
	I0319 20:41:54.884211   59019 system_pods.go:89] "etcd-no-preload-414130" [f5193538-7a5a-4130-b0a5-99307fa08c3d] Running
	I0319 20:41:54.884217   59019 system_pods.go:89] "kube-apiserver-no-preload-414130" [3f925dd3-aa40-4133-ad01-3e007db2f4e1] Running
	I0319 20:41:54.884223   59019 system_pods.go:89] "kube-controller-manager-no-preload-414130" [c3ef5184-1785-4593-99a5-81fa6b00002a] Running
	I0319 20:41:54.884230   59019 system_pods.go:89] "kube-proxy-m7m4h" [06239fd6-3053-4a7b-9a73-62886b59fa6a] Running
	I0319 20:41:54.884236   59019 system_pods.go:89] "kube-scheduler-no-preload-414130" [44a3d1b2-2bae-4034-951a-5e5c10d35080] Running
	I0319 20:41:54.884246   59019 system_pods.go:89] "metrics-server-569cc877fc-27n2b" [2fe034cc-d87f-410e-b1f7-e9e8cd3fc7e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0319 20:41:54.884268   59019 system_pods.go:89] "storage-provisioner" [6f9e4db1-704f-4e62-816c-c4e1a9e70ae5] Running
	I0319 20:41:54.884281   59019 system_pods.go:126] duration metric: took 204.70598ms to wait for k8s-apps to be running ...
	I0319 20:41:54.884294   59019 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 20:41:54.884348   59019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:41:54.901838   59019 system_svc.go:56] duration metric: took 17.536645ms WaitForService to wait for kubelet
	I0319 20:41:54.901869   59019 kubeadm.go:576] duration metric: took 3.313198534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 20:41:54.901887   59019 node_conditions.go:102] verifying NodePressure condition ...
	I0319 20:41:55.080463   59019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0319 20:41:55.080485   59019 node_conditions.go:123] node cpu capacity is 2
	I0319 20:41:55.080495   59019 node_conditions.go:105] duration metric: took 178.603035ms to run NodePressure ...
	I0319 20:41:55.080507   59019 start.go:240] waiting for startup goroutines ...
	I0319 20:41:55.080513   59019 start.go:245] waiting for cluster config update ...
	I0319 20:41:55.080523   59019 start.go:254] writing updated cluster config ...
	I0319 20:41:55.080753   59019 ssh_runner.go:195] Run: rm -f paused
	I0319 20:41:55.130477   59019 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0319 20:41:55.133906   59019 out.go:177] * Done! kubectl is now configured to use "no-preload-414130" cluster and "default" namespace by default
	I0319 20:42:30.437086   59621 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0319 20:42:30.437422   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:30.437622   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:35.438338   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:35.438692   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:42:45.439528   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:42:45.439739   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:05.440809   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:05.441065   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441275   59621 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0319 20:43:45.441576   59621 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0319 20:43:45.441641   59621 kubeadm.go:309] 
	I0319 20:43:45.441736   59621 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0319 20:43:45.442100   59621 kubeadm.go:309] 		timed out waiting for the condition
	I0319 20:43:45.442116   59621 kubeadm.go:309] 
	I0319 20:43:45.442178   59621 kubeadm.go:309] 	This error is likely caused by:
	I0319 20:43:45.442258   59621 kubeadm.go:309] 		- The kubelet is not running
	I0319 20:43:45.442408   59621 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0319 20:43:45.442419   59621 kubeadm.go:309] 
	I0319 20:43:45.442553   59621 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0319 20:43:45.442593   59621 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0319 20:43:45.442639   59621 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0319 20:43:45.442649   59621 kubeadm.go:309] 
	I0319 20:43:45.442771   59621 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0319 20:43:45.442876   59621 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0319 20:43:45.442887   59621 kubeadm.go:309] 
	I0319 20:43:45.443021   59621 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0319 20:43:45.443129   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0319 20:43:45.443227   59621 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0319 20:43:45.443292   59621 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0319 20:43:45.443299   59621 kubeadm.go:309] 
	I0319 20:43:45.444883   59621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0319 20:43:45.444989   59621 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0319 20:43:45.445071   59621 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0319 20:43:45.445156   59621 kubeadm.go:393] duration metric: took 8m0.192289219s to StartCluster
	I0319 20:43:45.445206   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 20:43:45.445277   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 20:43:45.496166   59621 cri.go:89] found id: ""
	I0319 20:43:45.496194   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.496205   59621 logs.go:278] No container was found matching "kube-apiserver"
	I0319 20:43:45.496212   59621 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 20:43:45.496294   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 20:43:45.558367   59621 cri.go:89] found id: ""
	I0319 20:43:45.558393   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.558402   59621 logs.go:278] No container was found matching "etcd"
	I0319 20:43:45.558407   59621 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 20:43:45.558453   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 20:43:45.609698   59621 cri.go:89] found id: ""
	I0319 20:43:45.609732   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.609744   59621 logs.go:278] No container was found matching "coredns"
	I0319 20:43:45.609751   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 20:43:45.609800   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 20:43:45.649175   59621 cri.go:89] found id: ""
	I0319 20:43:45.649201   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.649212   59621 logs.go:278] No container was found matching "kube-scheduler"
	I0319 20:43:45.649219   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 20:43:45.649283   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 20:43:45.694842   59621 cri.go:89] found id: ""
	I0319 20:43:45.694882   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.694893   59621 logs.go:278] No container was found matching "kube-proxy"
	I0319 20:43:45.694901   59621 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 20:43:45.694957   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 20:43:45.737915   59621 cri.go:89] found id: ""
	I0319 20:43:45.737943   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.737953   59621 logs.go:278] No container was found matching "kube-controller-manager"
	I0319 20:43:45.737960   59621 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 20:43:45.738019   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 20:43:45.780236   59621 cri.go:89] found id: ""
	I0319 20:43:45.780277   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.780289   59621 logs.go:278] No container was found matching "kindnet"
	I0319 20:43:45.780297   59621 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0319 20:43:45.780354   59621 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0319 20:43:45.820023   59621 cri.go:89] found id: ""
	I0319 20:43:45.820053   59621 logs.go:276] 0 containers: []
	W0319 20:43:45.820063   59621 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0319 20:43:45.820074   59621 logs.go:123] Gathering logs for kubelet ...
	I0319 20:43:45.820089   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0319 20:43:45.875070   59621 logs.go:123] Gathering logs for dmesg ...
	I0319 20:43:45.875107   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 20:43:45.891804   59621 logs.go:123] Gathering logs for describe nodes ...
	I0319 20:43:45.891831   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0319 20:43:45.977588   59621 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0319 20:43:45.977609   59621 logs.go:123] Gathering logs for CRI-O ...
	I0319 20:43:45.977624   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 20:43:46.083625   59621 logs.go:123] Gathering logs for container status ...
	I0319 20:43:46.083654   59621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0319 20:43:46.129458   59621 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0319 20:43:46.129509   59621 out.go:239] * 
	W0319 20:43:46.129569   59621 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.129599   59621 out.go:239] * 
	W0319 20:43:46.130743   59621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 20:43:46.134462   59621 out.go:177] 
	W0319 20:43:46.135751   59621 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0319 20:43:46.135817   59621 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0319 20:43:46.135849   59621 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0319 20:43:46.137404   59621 out.go:177] 
	
	
	==> CRI-O <==
	Mar 19 20:54:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:47.996626248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881687996588260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eee68440-2e2b-4b92-9171-a55feac812fd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:47.997551458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=735b2438-a585-4c24-8ea3-8c0841587c66 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:47.997601625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=735b2438-a585-4c24-8ea3-8c0841587c66 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:47 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:47.997634014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=735b2438-a585-4c24-8ea3-8c0841587c66 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.041092110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f6f9f5d-aee8-4e4a-99a8-b15a2bcafde6 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.041170412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f6f9f5d-aee8-4e4a-99a8-b15a2bcafde6 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.042865781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c3d6924-8cf8-4440-8005-2ce7b6e4efac name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.043240638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881688043221278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c3d6924-8cf8-4440-8005-2ce7b6e4efac name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.043892346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7adf03b-2f6c-4e65-8d05-cd190d2c1cde name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.043953121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7adf03b-2f6c-4e65-8d05-cd190d2c1cde name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.043996632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7adf03b-2f6c-4e65-8d05-cd190d2c1cde name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.078586977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8193e49-241e-47ab-939d-ae75726482e5 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.078663581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8193e49-241e-47ab-939d-ae75726482e5 name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.086916379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b500691d-f489-4a77-b2d0-128eaccc833d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.087283490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881688087259370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b500691d-f489-4a77-b2d0-128eaccc833d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.088216407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d628db24-75b1-42e7-8dab-2600e5941ade name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.088268315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d628db24-75b1-42e7-8dab-2600e5941ade name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.088300515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d628db24-75b1-42e7-8dab-2600e5941ade name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.123653005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f3b1be8-8d14-4e74-a245-1d994ddeaa5d name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.123728002Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f3b1be8-8d14-4e74-a245-1d994ddeaa5d name=/runtime.v1.RuntimeService/Version
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.124752949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3548627a-86d7-473a-82f1-300b01925243 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.125133016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710881688125110594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3548627a-86d7-473a-82f1-300b01925243 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.125798386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dbd9b0b-3331-44e5-80f1-9297eb03bea6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.125859405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dbd9b0b-3331-44e5-80f1-9297eb03bea6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 19 20:54:48 old-k8s-version-159022 crio[657]: time="2024-03-19 20:54:48.125891956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2dbd9b0b-3331-44e5-80f1-9297eb03bea6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar19 20:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055341] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752911] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.544871] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.711243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.190356] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.060609] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066334] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.201088] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.130943] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.285680] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +7.272629] systemd-fstab-generator[845]: Ignoring "noauto" option for root device
	[  +0.072227] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.223992] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +10.810145] kauditd_printk_skb: 46 callbacks suppressed
	[Mar19 20:39] systemd-fstab-generator[4992]: Ignoring "noauto" option for root device
	[Mar19 20:41] systemd-fstab-generator[5275]: Ignoring "noauto" option for root device
	[  +0.073912] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:54:48 up 19 min,  0 users,  load average: 0.06, 0.04, 0.04
	Linux old-k8s-version-159022 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: goroutine 141 [runnable]:
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00048e8c0)
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: goroutine 142 [select]:
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000105090, 0xc000c11201, 0xc000896900, 0xc0007f6630, 0xc000c2c700, 0xc000c2c6c0)
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c112c0, 0x0, 0x0)
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00048e8c0)
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 19 20:54:45 old-k8s-version-159022 kubelet[6731]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 19 20:54:45 old-k8s-version-159022 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 19 20:54:45 old-k8s-version-159022 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 19 20:54:46 old-k8s-version-159022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 134.
	Mar 19 20:54:46 old-k8s-version-159022 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 19 20:54:46 old-k8s-version-159022 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 19 20:54:46 old-k8s-version-159022 kubelet[6741]: I0319 20:54:46.522483    6741 server.go:416] Version: v1.20.0
	Mar 19 20:54:46 old-k8s-version-159022 kubelet[6741]: I0319 20:54:46.522867    6741 server.go:837] Client rotation is on, will bootstrap in background
	Mar 19 20:54:46 old-k8s-version-159022 kubelet[6741]: I0319 20:54:46.525279    6741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 19 20:54:46 old-k8s-version-159022 kubelet[6741]: W0319 20:54:46.526411    6741 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 19 20:54:46 old-k8s-version-159022 kubelet[6741]: I0319 20:54:46.526461    6741 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 2 (241.063018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-159022" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (116.67s)
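
A minimal sketch of follow-up checks for this failure, based only on the commands the output above already recommends (systemctl/journalctl for the kubelet, crictl against the CRI-O socket, and minikube's own suggestion to retry with the systemd cgroup driver). The profile name is taken from this run; the `minikube ssh -- <cmd>` pass-through form is an assumption about how one would run the node-side commands:

	# inspect kubelet health on the node (commands quoted in the kubeadm error above)
	minikube ssh -p old-k8s-version-159022 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-159022 -- sudo journalctl -xeu kubelet
	# list control-plane containers via CRI-O, as the kubeadm message suggests
	minikube ssh -p old-k8s-version-159022 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry with the override suggested in the failure output
	minikube start -p old-k8s-version-159022 --extra-config=kubelet.cgroup-driver=systemd
	# collect full logs for a bug report, as prompted in the output
	minikube logs -p old-k8s-version-159022 --file=logs.txt
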

                                                
                                    

Test pass (246/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 53.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 13.79
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.13
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.30.0-beta.0/json-events 47.24
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.55
31 TestOffline 103.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 212.34
38 TestAddons/parallel/Registry 26.68
40 TestAddons/parallel/InspektorGadget 11.9
41 TestAddons/parallel/MetricsServer 6.89
42 TestAddons/parallel/HelmTiller 12.14
44 TestAddons/parallel/CSI 54.55
45 TestAddons/parallel/Headlamp 25.95
46 TestAddons/parallel/CloudSpanner 6.06
47 TestAddons/parallel/LocalPath 22.53
48 TestAddons/parallel/NvidiaDevicePlugin 6.81
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 60.74
55 TestCertExpiration 297.16
57 TestForceSystemdFlag 71.87
58 TestForceSystemdEnv 72.33
60 TestKVMDriverInstallOrUpdate 4.5
64 TestErrorSpam/setup 44.04
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.64
68 TestErrorSpam/unpause 1.7
69 TestErrorSpam/stop 5.69
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 62.74
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.08
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
81 TestFunctional/serial/CacheCmd/cache/add_local 2.39
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 34.94
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.52
92 TestFunctional/serial/LogsFileCmd 1.53
93 TestFunctional/serial/InvalidService 4.23
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 21.41
97 TestFunctional/parallel/DryRun 0.29
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.79
103 TestFunctional/parallel/ServiceCmdConnect 26.57
104 TestFunctional/parallel/AddonsCmd 0.14
107 TestFunctional/parallel/SSHCmd 0.46
108 TestFunctional/parallel/CpCmd 1.4
109 TestFunctional/parallel/MySQL 25.42
110 TestFunctional/parallel/FileSync 0.26
111 TestFunctional/parallel/CertSync 1.41
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
119 TestFunctional/parallel/License 0.61
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
136 TestFunctional/parallel/ImageCommands/ImageBuild 6.57
137 TestFunctional/parallel/ImageCommands/Setup 2.13
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
139 TestFunctional/parallel/ProfileCmd/profile_list 0.28
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 8.06
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.68
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.18
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.7
145 TestFunctional/parallel/ServiceCmd/DeployApp 28.24
146 TestFunctional/parallel/ImageCommands/ImageRemove 1.14
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.5
148 TestFunctional/parallel/MountCmd/any-port 8.83
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.43
150 TestFunctional/parallel/MountCmd/specific-port 1.59
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
152 TestFunctional/parallel/Version/short 0.05
153 TestFunctional/parallel/Version/components 0.51
154 TestFunctional/parallel/ServiceCmd/List 1.24
155 TestFunctional/parallel/ServiceCmd/JSONOutput 1.23
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
157 TestFunctional/parallel/ServiceCmd/Format 0.28
158 TestFunctional/parallel/ServiceCmd/URL 0.29
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 226
166 TestMultiControlPlane/serial/DeployApp 7.24
167 TestMultiControlPlane/serial/PingHostFromPods 1.36
168 TestMultiControlPlane/serial/AddWorkerNode 47.25
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
171 TestMultiControlPlane/serial/CopyFile 13.63
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.44
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
184 TestJSONOutput/start/Command 96.88
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.76
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.71
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.37
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 91.99
216 TestMountStart/serial/StartWithMountFirst 27.7
217 TestMountStart/serial/VerifyMountFirst 0.39
218 TestMountStart/serial/StartWithMountSecond 28.73
219 TestMountStart/serial/VerifyMountSecond 0.39
220 TestMountStart/serial/DeleteFirst 0.69
221 TestMountStart/serial/VerifyMountPostDelete 0.39
222 TestMountStart/serial/Stop 2.29
223 TestMountStart/serial/RestartStopped 24.65
224 TestMountStart/serial/VerifyMountPostStop 0.39
227 TestMultiNode/serial/FreshStart2Nodes 107.36
228 TestMultiNode/serial/DeployApp2Nodes 5.94
229 TestMultiNode/serial/PingHostFrom2Pods 0.88
230 TestMultiNode/serial/AddNode 40.73
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.22
233 TestMultiNode/serial/CopyFile 7.44
234 TestMultiNode/serial/StopNode 2.51
235 TestMultiNode/serial/StartAfterStop 32.09
237 TestMultiNode/serial/DeleteNode 2.19
239 TestMultiNode/serial/RestartMultiNode 168.93
240 TestMultiNode/serial/ValidateNameConflict 45.18
247 TestScheduledStopUnix 117.17
251 TestRunningBinaryUpgrade 226.51
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/StartWithK8s 97.01
258 TestNoKubernetes/serial/StartWithStopK8s 8.42
259 TestStoppedBinaryUpgrade/Setup 2.57
260 TestNoKubernetes/serial/Start 57.3
261 TestStoppedBinaryUpgrade/Upgrade 103.65
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
263 TestNoKubernetes/serial/ProfileList 4.5
264 TestNoKubernetes/serial/Stop 1.6
265 TestNoKubernetes/serial/StartNoArgs 40.93
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
269 TestPause/serial/Start 63.72
284 TestNetworkPlugins/group/false 3.2
292 TestStartStop/group/no-preload/serial/FirstStart 157.42
294 TestStartStop/group/embed-certs/serial/FirstStart 98.31
295 TestStartStop/group/no-preload/serial/DeployApp 11.31
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
298 TestStartStop/group/embed-certs/serial/DeployApp 10.29
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.65
306 TestStartStop/group/no-preload/serial/SecondStart 698.81
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
311 TestStartStop/group/embed-certs/serial/SecondStart 534.72
312 TestStartStop/group/old-k8s-version/serial/Stop 1.53
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 502.69
326 TestStartStop/group/newest-cni/serial/FirstStart 61.76
327 TestNetworkPlugins/group/auto/Start 85.33
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
330 TestStartStop/group/newest-cni/serial/Stop 10.7
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
332 TestStartStop/group/newest-cni/serial/SecondStart 41.33
333 TestNetworkPlugins/group/auto/KubeletFlags 0.23
334 TestNetworkPlugins/group/auto/NetCatPod 11.27
335 TestNetworkPlugins/group/auto/DNS 0.2
336 TestNetworkPlugins/group/auto/Localhost 0.14
337 TestNetworkPlugins/group/auto/HairPin 0.19
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
341 TestNetworkPlugins/group/kindnet/Start 65.61
342 TestStartStop/group/newest-cni/serial/Pause 3.18
343 TestNetworkPlugins/group/calico/Start 114.72
344 TestNetworkPlugins/group/custom-flannel/Start 135.93
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
347 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
348 TestNetworkPlugins/group/kindnet/DNS 0.22
349 TestNetworkPlugins/group/kindnet/Localhost 0.16
350 TestNetworkPlugins/group/kindnet/HairPin 0.15
351 TestNetworkPlugins/group/enable-default-cni/Start 67.68
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.21
354 TestNetworkPlugins/group/calico/NetCatPod 12.24
355 TestNetworkPlugins/group/calico/DNS 0.2
356 TestNetworkPlugins/group/calico/Localhost 0.16
357 TestNetworkPlugins/group/calico/HairPin 0.2
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
360 TestNetworkPlugins/group/flannel/Start 84.22
361 TestNetworkPlugins/group/custom-flannel/DNS 0.23
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
364 TestNetworkPlugins/group/bridge/Start 123.7
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.56
367 TestNetworkPlugins/group/enable-default-cni/DNS 26.62
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
372 TestNetworkPlugins/group/flannel/NetCatPod 10.23
373 TestNetworkPlugins/group/flannel/DNS 0.16
374 TestNetworkPlugins/group/flannel/Localhost 0.14
375 TestNetworkPlugins/group/flannel/HairPin 0.14
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
377 TestNetworkPlugins/group/bridge/NetCatPod 11.24
378 TestNetworkPlugins/group/bridge/DNS 0.15
379 TestNetworkPlugins/group/bridge/Localhost 0.14
380 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (53.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-454018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-454018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.541040323s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (53.54s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-454018
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-454018: exit status 85 (67.942401ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-454018 | jenkins | v1.32.0 | 19 Mar 24 19:04 UTC |          |
	|         | -p download-only-454018        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:04:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:04:35.977542   17313 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:04:35.977783   17313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:04:35.977793   17313 out.go:304] Setting ErrFile to fd 2...
	I0319 19:04:35.977798   17313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:04:35.977987   17313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	W0319 19:04:35.978107   17313 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18453-10028/.minikube/config/config.json: open /home/jenkins/minikube-integration/18453-10028/.minikube/config/config.json: no such file or directory
	I0319 19:04:35.978650   17313 out.go:298] Setting JSON to true
	I0319 19:04:35.979504   17313 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2774,"bootTime":1710872302,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:04:35.979562   17313 start.go:139] virtualization: kvm guest
	I0319 19:04:35.982025   17313 out.go:97] [download-only-454018] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:04:35.983469   17313 out.go:169] MINIKUBE_LOCATION=18453
	W0319 19:04:35.982141   17313 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball: no such file or directory
	I0319 19:04:35.982195   17313 notify.go:220] Checking for updates...
	I0319 19:04:35.986311   17313 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:04:35.987792   17313 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:04:35.989263   17313 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:04:35.990556   17313 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0319 19:04:35.992827   17313 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 19:04:35.993060   17313 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:04:36.090077   17313 out.go:97] Using the kvm2 driver based on user configuration
	I0319 19:04:36.090099   17313 start.go:297] selected driver: kvm2
	I0319 19:04:36.090105   17313 start.go:901] validating driver "kvm2" against <nil>
	I0319 19:04:36.090407   17313 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:04:36.090509   17313 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:04:36.105021   17313 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:04:36.105065   17313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 19:04:36.105550   17313 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0319 19:04:36.105703   17313 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 19:04:36.105760   17313 cni.go:84] Creating CNI manager for ""
	I0319 19:04:36.105792   17313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:04:36.105801   17313 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 19:04:36.105845   17313 start.go:340] cluster config:
	{Name:download-only-454018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-454018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:04:36.106022   17313 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:04:36.107969   17313 out.go:97] Downloading VM boot image ...
	I0319 19:04:36.107999   17313 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0319 19:04:45.897861   17313 out.go:97] Starting "download-only-454018" primary control-plane node in "download-only-454018" cluster
	I0319 19:04:45.897899   17313 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 19:04:46.010334   17313 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0319 19:04:46.010369   17313 cache.go:56] Caching tarball of preloaded images
	I0319 19:04:46.010514   17313 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 19:04:46.012459   17313 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0319 19:04:46.012476   17313 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:04:46.128499   17313 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0319 19:05:00.431732   17313 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:05:00.431828   17313 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:05:01.332132   17313 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0319 19:05:01.332470   17313 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/download-only-454018/config.json ...
	I0319 19:05:01.332498   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/download-only-454018/config.json: {Name:mk0080391a85165cfef5ff43ae80e0eecdd3b53d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:05:01.332665   17313 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 19:05:01.332868   17313 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-454018 host does not exist
	  To start a cluster, run: "minikube start -p download-only-454018"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-454018
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (13.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-031263 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.788950205s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (13.79s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-031263
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-031263: exit status 85 (66.997944ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-454018 | jenkins | v1.32.0 | 19 Mar 24 19:04 UTC |                     |
	|         | -p download-only-454018        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC | 19 Mar 24 19:05 UTC |
	| delete  | -p download-only-454018        | download-only-454018 | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC | 19 Mar 24 19:05 UTC |
	| start   | -o=json --download-only        | download-only-031263 | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC |                     |
	|         | -p download-only-031263        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:05:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:05:29.854630   17580 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:05:29.854743   17580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:05:29.854749   17580 out.go:304] Setting ErrFile to fd 2...
	I0319 19:05:29.854755   17580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:05:29.854944   17580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:05:29.855476   17580 out.go:298] Setting JSON to true
	I0319 19:05:29.856367   17580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2828,"bootTime":1710872302,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:05:29.856427   17580 start.go:139] virtualization: kvm guest
	I0319 19:05:29.858801   17580 out.go:97] [download-only-031263] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:05:29.860440   17580 out.go:169] MINIKUBE_LOCATION=18453
	I0319 19:05:29.858997   17580 notify.go:220] Checking for updates...
	I0319 19:05:29.863176   17580 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:05:29.864630   17580 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:05:29.865944   17580 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:05:29.867289   17580 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0319 19:05:29.869805   17580 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 19:05:29.870006   17580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:05:29.900651   17580 out.go:97] Using the kvm2 driver based on user configuration
	I0319 19:05:29.900671   17580 start.go:297] selected driver: kvm2
	I0319 19:05:29.900676   17580 start.go:901] validating driver "kvm2" against <nil>
	I0319 19:05:29.901015   17580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:05:29.901080   17580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:05:29.915145   17580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:05:29.915182   17580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 19:05:29.915635   17580 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0319 19:05:29.915760   17580 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 19:05:29.915816   17580 cni.go:84] Creating CNI manager for ""
	I0319 19:05:29.915828   17580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:05:29.915835   17580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 19:05:29.915878   17580 start.go:340] cluster config:
	{Name:download-only-031263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-031263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:05:29.915970   17580 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:05:29.917685   17580 out.go:97] Starting "download-only-031263" primary control-plane node in "download-only-031263" cluster
	I0319 19:05:29.917699   17580 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:05:30.416543   17580 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0319 19:05:30.416576   17580 cache.go:56] Caching tarball of preloaded images
	I0319 19:05:30.416727   17580 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0319 19:05:30.418782   17580 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0319 19:05:30.418794   17580 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:05:30.608858   17580 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f4e94cb6232b24c3932ab20b1ee6dad -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-031263 host does not exist
	  To start a cluster, run: "minikube start -p download-only-031263"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-031263
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/json-events (47.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-516738 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-516738 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (47.237111785s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (47.24s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-516738
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-516738: exit status 85 (68.82382ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-454018 | jenkins | v1.32.0 | 19 Mar 24 19:04 UTC |                     |
	|         | -p download-only-454018             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC | 19 Mar 24 19:05 UTC |
	| delete  | -p download-only-454018             | download-only-454018 | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC | 19 Mar 24 19:05 UTC |
	| start   | -o=json --download-only             | download-only-031263 | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC |                     |
	|         | -p download-only-031263             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC | 19 Mar 24 19:05 UTC |
	| delete  | -p download-only-031263             | download-only-031263 | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC | 19 Mar 24 19:05 UTC |
	| start   | -o=json --download-only             | download-only-516738 | jenkins | v1.32.0 | 19 Mar 24 19:05 UTC |                     |
	|         | -p download-only-516738             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 19:05:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:05:43.970221   17760 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:05:43.970368   17760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:05:43.970379   17760 out.go:304] Setting ErrFile to fd 2...
	I0319 19:05:43.970386   17760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:05:43.970547   17760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:05:43.971085   17760 out.go:298] Setting JSON to true
	I0319 19:05:43.971926   17760 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2842,"bootTime":1710872302,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:05:43.971984   17760 start.go:139] virtualization: kvm guest
	I0319 19:05:43.974254   17760 out.go:97] [download-only-516738] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:05:43.975836   17760 out.go:169] MINIKUBE_LOCATION=18453
	I0319 19:05:43.974446   17760 notify.go:220] Checking for updates...
	I0319 19:05:43.978860   17760 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:05:43.980235   17760 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:05:43.981702   17760 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:05:43.983156   17760 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0319 19:05:43.985688   17760 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 19:05:43.985896   17760 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:05:44.017161   17760 out.go:97] Using the kvm2 driver based on user configuration
	I0319 19:05:44.017179   17760 start.go:297] selected driver: kvm2
	I0319 19:05:44.017184   17760 start.go:901] validating driver "kvm2" against <nil>
	I0319 19:05:44.017474   17760 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:05:44.017530   17760 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18453-10028/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0319 19:05:44.031154   17760 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0319 19:05:44.031199   17760 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 19:05:44.031629   17760 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0319 19:05:44.031775   17760 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 19:05:44.031825   17760 cni.go:84] Creating CNI manager for ""
	I0319 19:05:44.031838   17760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0319 19:05:44.031845   17760 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 19:05:44.031887   17760 start.go:340] cluster config:
	{Name:download-only-516738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-516738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:05:44.031965   17760 iso.go:125] acquiring lock: {Name:mk757175fceba09a5d2cb7ea19c00dcf80754cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:05:44.033680   17760 out.go:97] Starting "download-only-516738" primary control-plane node in "download-only-516738" cluster
	I0319 19:05:44.033697   17760 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 19:05:44.534872   17760 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0319 19:05:44.534918   17760 cache.go:56] Caching tarball of preloaded images
	I0319 19:05:44.535084   17760 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 19:05:44.537029   17760 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0319 19:05:44.537044   17760 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:05:44.728096   17760 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f8942c73bc4cf06adbbee21f15bde53 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0319 19:05:55.565406   17760 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:05:55.565506   17760 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18453-10028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0319 19:05:56.314528   17760 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on crio
	I0319 19:05:56.314873   17760 profile.go:142] Saving config to /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/download-only-516738/config.json ...
	I0319 19:05:56.314903   17760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/download-only-516738/config.json: {Name:mk1cd2e16c0168952bbc51c2cd3d7b6aa9b61899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:05:56.315050   17760 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0319 19:05:56.315203   17760 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18453-10028/.minikube/cache/linux/amd64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-516738 host does not exist
	  To start a cluster, run: "minikube start -p download-only-516738"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-516738
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-144883 --alsologtostderr --binary-mirror http://127.0.0.1:44349 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-144883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-144883
--- PASS: TestBinaryMirror (0.55s)
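
The --binary-mirror flag here points minikube's Kubernetes binary downloads at the temporary local HTTP server the test starts; 127.0.0.1:44349 is simply the port that server happened to bind during this run. A hand-run equivalent would look like this (a sketch, not the test's exact setup, and it assumes a mirror is actually serving at that address):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-144883 \
	  --alsologtostderr --binary-mirror http://127.0.0.1:44349 \
	  --driver=kvm2 --container-runtime=crio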

                                                
                                    
TestOffline (103.37s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-803139 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-803139 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m42.322292589s)
helpers_test.go:175: Cleaning up "offline-crio-803139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-803139
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-803139: (1.047361688s)
--- PASS: TestOffline (103.37s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-630101
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-630101: exit status 85 (59.021888ms)

                                                
                                                
-- stdout --
	* Profile "addons-630101" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-630101"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-630101
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-630101: exit status 85 (57.572789ms)

                                                
                                                
-- stdout --
	* Profile "addons-630101" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-630101"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (212.34s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-630101 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-630101 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.334907173s)
--- PASS: TestAddons/Setup (212.34s)

                                                
                                    
TestAddons/parallel/Registry (26.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 25.84881ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5c2dl" [33e86949-d2bb-4ead-9b37-bdeedecabf55] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005579234s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9hbsf" [6c3ae126-cbbe-4d86-990a-82e1182780db] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005395917s
addons_test.go:340: (dbg) Run:  kubectl --context addons-630101 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-630101 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-630101 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.446081854s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 ip
2024/03/19 19:10:30 [DEBUG] GET http://192.168.39.203:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-630101 addons disable registry --alsologtostderr -v=1: (1.039526237s)
--- PASS: TestAddons/parallel/Registry (26.68s)
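
The in-cluster reachability probe this test relies on can be repeated manually against the same profile (a sketch; it assumes the addons-630101 cluster from this run is still up with the registry addon enabled):

	kubectl --context addons-630101 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"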

                                                
                                    
TestAddons/parallel/InspektorGadget (11.9s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-czpgl" [b3462cb7-a00e-418e-b359-9a98e92336bc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006560785s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-630101
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-630101: (5.889499126s)
--- PASS: TestAddons/parallel/InspektorGadget (11.90s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.89s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 25.81874ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-rxmfc" [ebb99aee-ec48-4d22-a827-17b63f98c4fe] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005745753s
addons_test.go:415: (dbg) Run:  kubectl --context addons-630101 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.89s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.108864ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-pjgds" [b869828d-8013-4fb0-96fb-36e7be67a2d9] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005186551s
addons_test.go:473: (dbg) Run:  kubectl --context addons-630101 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-630101 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.47858469s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.14s)

                                                
                                    
TestAddons/parallel/CSI (54.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.478875ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-630101 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-630101 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6b5b9f38-a735-4c9c-83d8-27b057275924] Pending
helpers_test.go:344: "task-pv-pod" [6b5b9f38-a735-4c9c-83d8-27b057275924] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6b5b9f38-a735-4c9c-83d8-27b057275924] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.005557069s
addons_test.go:584: (dbg) Run:  kubectl --context addons-630101 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-630101 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-630101 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-630101 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-630101 delete pod task-pv-pod: (1.317037857s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-630101 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-630101 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-630101 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e6a3d28b-9c36-44e8-b32d-ca005e633225] Pending
helpers_test.go:344: "task-pv-pod-restore" [e6a3d28b-9c36-44e8-b32d-ca005e633225] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e6a3d28b-9c36-44e8-b32d-ca005e633225] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004058858s
addons_test.go:626: (dbg) Run:  kubectl --context addons-630101 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-630101 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-630101 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-630101 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.223506493s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.55s)
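
Note: the CSI flow above is provision (hpvc) -> pod -> VolumeSnapshot (new-snapshot-demo) -> restore into a new PVC (hpvc-restore) via dataSource. A minimal sketch of the snapshot and restore manifests follows; the class names csi-hostpath-sc and csi-hostpath-snapclass are assumptions about what the addon installs, not copied from the testdata files (check with: kubectl get storageclass,volumesnapshotclass).

kubectl --context addons-630101 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc                 # must already exist and be bound
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF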

                                                
                                    
x
+
TestAddons/parallel/Headlamp (25.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-630101 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-630101 --alsologtostderr -v=1: (1.94693727s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-h94gd" [8e09ab7c-fd92-4587-85c9-9cf10b97e200] Pending
helpers_test.go:344: "headlamp-5485c556b-h94gd" [8e09ab7c-fd92-4587-85c9-9cf10b97e200] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-h94gd" [8e09ab7c-fd92-4587-85c9-9cf10b97e200] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-h94gd" [8e09ab7c-fd92-4587-85c9-9cf10b97e200] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 24.005968177s
--- PASS: TestAddons/parallel/Headlamp (25.95s)
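
Note: the same check outside the harness is just the addon enable plus a readiness wait on the headlamp namespace; commands taken from the log, the kubectl wait is an equivalent stand-in for the test's poll loop.

out/minikube-linux-amd64 addons enable headlamp -p addons-630101 --alsologtostderr -v=1
kubectl --context addons-630101 -n headlamp wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=headlamp --timeout=8m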

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-q7hdj" [2b103451-30b6-46e0-b434-9e5370415973] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003815338s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-630101
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-630101: (1.04635023s)
--- PASS: TestAddons/parallel/CloudSpanner (6.06s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (22.53s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-630101 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-630101 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eb07107d-ab9d-407e-a58f-fa58eb3fb9b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eb07107d-ab9d-407e-a58f-fa58eb3fb9b1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eb07107d-ab9d-407e-a58f-fa58eb3fb9b1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.004804355s
addons_test.go:891: (dbg) Run:  kubectl --context addons-630101 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 ssh "cat /opt/local-path-provisioner/pvc-6a4de478-6c61-4a98-899b-4b8888bdf238_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-630101 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-630101 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-630101 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (22.53s)
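
Note: the long run of Pending polls above is expected with local-path, whose storage class typically uses WaitForFirstConsumer binding, so the PVC only binds once the pod is scheduled. A sketch of manifests that reproduce the flow; the storage class name local-path and the echoed file content are assumptions, and the host path placeholder must be read from the bound PV name.

kubectl --context addons-630101 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path        # assumed class name from the rancher provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /data/file1"]   # assumed payload
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
out/minikube-linux-amd64 -p addons-630101 ssh \
  "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"   # <pv-name> = bound PV, e.g. pvc-<uid>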

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.81s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ld4j7" [0bb3ac27-4dd0-4ffc-8d11-225f4858d40d] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005536044s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-630101
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.81s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-l2zx4" [d174bf0b-4a12-4a7f-ba0f-29e10cfcd8f4] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004512176s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-630101 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-630101 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (60.74s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-346618 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-346618 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.281663802s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-346618 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-346618 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-346618 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-346618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-346618
--- PASS: TestCertOptions (60.74s)
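
Note: the start flags add extra IPs/names/port to the apiserver certificate; the SANs can be inspected directly with the same ssh command the test runs, plus a grep. Commands are from the log, only the grep is added.

out/minikube-linux-amd64 start -p cert-options-346618 --memory=2048 \
  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
  --apiserver-names=localhost --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p cert-options-346618 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"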

                                                
                                    
x
+
TestCertExpiration (297.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-428153 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0319 20:24:30.844057   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-428153 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.210400816s)
E0319 20:26:27.883933   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-428153 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-428153 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.93977618s)
helpers_test.go:175: Cleaning up "cert-expiration-428153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-428153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-428153: (1.011407619s)
--- PASS: TestCertExpiration (297.16s)
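
Note: a sketch of the same sequence; the sleep stands in for the wait the test performs between the two starts, and the second start with a longer --cert-expiration is expected to re-issue the lapsed certificates (that reading is inferred from the test, not stated in the log).

out/minikube-linux-amd64 start -p cert-expiration-428153 --memory=2048 \
  --cert-expiration=3m --driver=kvm2 --container-runtime=crio
sleep 180   # let the 3-minute certificates lapse
out/minikube-linux-amd64 start -p cert-expiration-428153 --memory=2048 \
  --cert-expiration=8760h --driver=kvm2 --container-runtime=crio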

                                                
                                    
x
+
TestForceSystemdFlag (71.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-910871 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-910871 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.889638444s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-910871 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-910871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-910871
--- PASS: TestForceSystemdFlag (71.87s)
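
Note: --force-systemd switches the container runtime's cgroup manager; with crio the setting lives in the drop-in the test cats above. A sketch, assuming the relevant key is cgroup_manager and that the forced value is "systemd" (the log does not show the file contents).

out/minikube-linux-amd64 start -p force-systemd-flag-910871 --memory=2048 \
  --force-systemd --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p force-systemd-flag-910871 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# expected (assumption): cgroup_manager = "systemd"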

                                                
                                    
x
+
TestForceSystemdEnv (72.33s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-587385 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-587385 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.524079587s)
helpers_test.go:175: Cleaning up "force-systemd-env-587385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-587385
--- PASS: TestForceSystemdEnv (72.33s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.5s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.50s)

                                                
                                    
x
+
TestErrorSpam/setup (44.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-582396 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-582396 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-582396 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-582396 --driver=kvm2  --container-runtime=crio: (44.037595819s)
--- PASS: TestErrorSpam/setup (44.04s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
x
+
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 stop: (2.312778596s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 stop: (1.426494078s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-582396 --log_dir /tmp/nospam-582396 stop: (1.951517624s)
--- PASS: TestErrorSpam/stop (5.69s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18453-10028/.minikube/files/etc/test/nested/copy/17301/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (62.74s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481771 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-481771 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m2.744216363s)
--- PASS: TestFunctional/serial/StartWithProxy (62.74s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481771 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-481771 --alsologtostderr -v=8: (37.078576158s)
functional_test.go:659: soft start took 37.079174283s for "functional-481771" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.08s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-481771 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 cache add registry.k8s.io/pause:3.3: (1.209561526s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 cache add registry.k8s.io/pause:latest: (1.054454999s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-481771 /tmp/TestFunctionalserialCacheCmdcacheadd_local2340030172/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cache add minikube-local-cache-test:functional-481771
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 cache add minikube-local-cache-test:functional-481771: (2.024168967s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cache delete minikube-local-cache-test:functional-481771
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-481771
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.39s)
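
Note: the local-image path of the cache command builds an image on the host, pushes it into the node's cache, then cleans up. The commands mirror the log; the build context "." is an assumption (the test builds from a generated temp dir).

docker build -t minikube-local-cache-test:functional-481771 .
out/minikube-linux-amd64 -p functional-481771 cache add minikube-local-cache-test:functional-481771
out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl images | grep minikube-local-cache-test
out/minikube-linux-amd64 -p functional-481771 cache delete minikube-local-cache-test:functional-481771
docker rmi minikube-local-cache-test:functional-481771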

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.827677ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
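
Note: the reload path deletes a cached image inside the node, confirms it is gone (the exit status 1 from crictl inspecti above), then restores it from the host cache. Commands are taken from the log.

out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
out/minikube-linux-amd64 -p functional-481771 cache reload
out/minikube-linux-amd64 -p functional-481771 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again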

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 kubectl -- --context functional-481771 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-481771 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.94s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481771 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-481771 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.935615027s)
functional_test.go:757: restart took 34.935755154s for "functional-481771" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.94s)
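
Note: --extra-config=apiserver.<flag>=<value> is passed through to the kube-apiserver static pod when the existing cluster restarts. A sketch of applying it and checking it took effect; the component=kube-apiserver label selector is the standard kubeadm one and is an assumption here.

out/minikube-linux-amd64 start -p functional-481771 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-481771 -n kube-system get pod -l component=kube-apiserver -o yaml \
  | grep enable-admission-plugins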

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-481771 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 logs: (1.52437175s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 logs --file /tmp/TestFunctionalserialLogsFileCmd4179920964/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 logs --file /tmp/TestFunctionalserialLogsFileCmd4179920964/001/logs.txt: (1.525059492s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.23s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-481771 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-481771
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-481771: exit status 115 (283.704778ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.193:31223 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-481771 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)
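
Note: minikube service exits with status 115 (SVC_UNREACHABLE) when the service has no running endpoints. The real testdata/invalidsvc.yaml is not shown in the log; the manifest below is an assumed stand-in that reproduces the condition, a NodePort service whose selector matches no pods.

kubectl --context functional-481771 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod        # intentionally matches nothing
  ports:
  - port: 80
    targetPort: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-481771   # exit 115, SVC_UNREACHABLE
kubectl --context functional-481771 delete service invalid-svc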

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 config get cpus: exit status 14 (71.125159ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 config get cpus: exit status 14 (66.268174ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
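
Note: config get exits with status 14 when the key is unset, which is what the two Non-zero exit lines above show. The same round trip from a shell, commands taken from the log:

out/minikube-linux-amd64 -p functional-481771 config get cpus     # exit 14 while unset
out/minikube-linux-amd64 -p functional-481771 config set cpus 2
out/minikube-linux-amd64 -p functional-481771 config get cpus     # prints 2
out/minikube-linux-amd64 -p functional-481771 config unset cpus
out/minikube-linux-amd64 -p functional-481771 config get cpus     # exit 14 again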

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (21.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-481771 --alsologtostderr -v=1]
E0319 19:20:04.834412   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:04.840315   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:04.851173   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:04.871410   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:04.911812   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:04.992178   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:05.152689   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:05.473396   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:20:06.113892   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-481771 --alsologtostderr -v=1] ...
E0319 19:20:25.317097   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
helpers_test.go:508: unable to kill pid 25697: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.41s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481771 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-481771 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.215902ms)

                                                
                                                
-- stdout --
	* [functional-481771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:20:03.992486   25605 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:20:03.992596   25605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:20:03.992607   25605 out.go:304] Setting ErrFile to fd 2...
	I0319 19:20:03.992613   25605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:20:03.992812   25605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:20:03.993368   25605 out.go:298] Setting JSON to false
	I0319 19:20:03.994330   25605 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3702,"bootTime":1710872302,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:20:03.994394   25605 start.go:139] virtualization: kvm guest
	I0319 19:20:03.996659   25605 out.go:177] * [functional-481771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 19:20:03.998181   25605 notify.go:220] Checking for updates...
	I0319 19:20:03.998203   25605 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:20:03.999690   25605 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:20:04.001116   25605 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:20:04.002619   25605 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:20:04.004091   25605 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:20:04.005546   25605 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:20:04.007505   25605 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:20:04.007880   25605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:20:04.007917   25605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:20:04.022471   25605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0319 19:20:04.022917   25605 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:20:04.023517   25605 main.go:141] libmachine: Using API Version  1
	I0319 19:20:04.023544   25605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:20:04.023853   25605 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:20:04.024017   25605 main.go:141] libmachine: (functional-481771) Calling .DriverName
	I0319 19:20:04.024292   25605 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:20:04.024583   25605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:20:04.024620   25605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:20:04.038991   25605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0319 19:20:04.039453   25605 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:20:04.039964   25605 main.go:141] libmachine: Using API Version  1
	I0319 19:20:04.039985   25605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:20:04.040331   25605 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:20:04.040517   25605 main.go:141] libmachine: (functional-481771) Calling .DriverName
	I0319 19:20:04.073201   25605 out.go:177] * Using the kvm2 driver based on existing profile
	I0319 19:20:04.074682   25605 start.go:297] selected driver: kvm2
	I0319 19:20:04.074692   25605 start.go:901] validating driver "kvm2" against &{Name:functional-481771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-481771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:20:04.074789   25605 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:20:04.076949   25605 out.go:177] 
	W0319 19:20:04.078406   25605 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0319 19:20:04.079779   25605 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481771 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-481771 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-481771 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.481597ms)

                                                
                                                
-- stdout --
	* [functional-481771] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 19:19:59.322129   25262 out.go:291] Setting OutFile to fd 1 ...
	I0319 19:19:59.322275   25262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:19:59.322286   25262 out.go:304] Setting ErrFile to fd 2...
	I0319 19:19:59.322292   25262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 19:19:59.322708   25262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 19:19:59.323374   25262 out.go:298] Setting JSON to false
	I0319 19:19:59.324579   25262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3697,"bootTime":1710872302,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 19:19:59.324666   25262 start.go:139] virtualization: kvm guest
	I0319 19:19:59.327146   25262 out.go:177] * [functional-481771] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0319 19:19:59.328803   25262 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 19:19:59.328764   25262 notify.go:220] Checking for updates...
	I0319 19:19:59.330237   25262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:19:59.331690   25262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 19:19:59.333128   25262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 19:19:59.334577   25262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 19:19:59.336034   25262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:19:59.338025   25262 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 19:19:59.338601   25262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:19:59.338651   25262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:19:59.354613   25262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0319 19:19:59.354961   25262 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:19:59.355429   25262 main.go:141] libmachine: Using API Version  1
	I0319 19:19:59.355449   25262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:19:59.355767   25262 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:19:59.355931   25262 main.go:141] libmachine: (functional-481771) Calling .DriverName
	I0319 19:19:59.356174   25262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 19:19:59.356471   25262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 19:19:59.356508   25262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 19:19:59.370117   25262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46421
	I0319 19:19:59.370439   25262 main.go:141] libmachine: () Calling .GetVersion
	I0319 19:19:59.371333   25262 main.go:141] libmachine: Using API Version  1
	I0319 19:19:59.371361   25262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 19:19:59.372601   25262 main.go:141] libmachine: () Calling .GetMachineName
	I0319 19:19:59.372786   25262 main.go:141] libmachine: (functional-481771) Calling .DriverName
	I0319 19:19:59.410596   25262 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0319 19:19:59.412109   25262 start.go:297] selected driver: kvm2
	I0319 19:19:59.412121   25262 start.go:901] validating driver "kvm2" against &{Name:functional-481771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-481771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:19:59.412218   25262 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:19:59.414410   25262 out.go:177] 
	W0319 19:19:59.415761   25262 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0319 19:19:59.417070   25262 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
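
The InternationalLanguage run above succeeds because minikube, driven with a French locale, prints the localized RSRC_INSUFFICIENT_REQ_MEMORY error when the requested memory (250 MiB) is below the 1800 MB usable minimum. The Go sketch below reproduces that invocation under stated assumptions: the binary path, profile name, driver, runtime and memory value come from this run's log, while the exact flag set (including --dry-run) and the helper itself are illustrative rather than the suite's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Deliberately request far less memory than the usable minimum so the
	// localized RSRC_INSUFFICIENT_REQ_MEMORY message is printed; the command
	// is expected to fail. The flag set is assumed, not copied from the suite.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-481771", "--dry-run", "--memory=250MB",
		"--driver=kvm2", "--container-runtime=crio", "--alsologtostderr", "-v=1")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil && strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("got the expected localized failure")
	}
}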

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
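
The status checks above exercise three output modes: the default table, a Go template over the Host, Kubelet, APIServer and Kubeconfig fields, and JSON. A minimal sketch of consuming the JSON form follows; the struct only mirrors the fields queried by the template above, and the exact JSON schema of minikube status -o json is an assumption rather than something shown in this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields used by the template above; the full schema is assumed.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-481771", "status", "-o", "json").Output()
	if err != nil {
		// minikube status exits non-zero when a component is not Running,
		// so the output may still be worth parsing.
		fmt.Println("status returned:", err)
	}
	var st status
	if jsonErr := json.Unmarshal(out, &st); jsonErr == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}
}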

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (26.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-481771 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-481771 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zlkt8" [4c34d14a-7d9c-4be0-a3d5-7fcc12f3d144] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zlkt8" [4c34d14a-7d9c-4be0-a3d5-7fcc12f3d144] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.004750726s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.193:30985
functional_test.go:1671: http://192.168.39.193:30985: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-zlkt8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.193:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.193:30985
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.57s)
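
ServiceCmdConnect exposes the echoserver deployment as a NodePort service and then fetches the URL minikube reports, expecting a body like the one above, which echoes the pod hostname and request metadata. A small Go sketch of that last step follows; the URL is hard-coded to the endpoint printed in this run and would normally come from minikube service hello-node-connect --url.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Endpoint copied from this run's log output.
	url := "http://192.168.39.193:30985"
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver answered with its pod hostname")
	}
}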

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh -n functional-481771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cp functional-481771:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1054084226/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh -n functional-481771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh -n functional-481771 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)
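
The cp test is a round trip: copy a local file into the VM, copy it back out, and compare the two, plus a copy into a destination path that does not exist yet. A sketch of the round-trip part is below; the run helper and temp-file handling are illustrative simplifications, while the cp argument forms match the commands shown above.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	profile := "functional-481771"
	local := "testdata/cp-test.txt"
	remote := "/home/docker/cp-test.txt"

	// Copy into the VM, then pull the same file back out into a temp path.
	if _, err := run("-p", profile, "cp", local, remote); err != nil {
		fmt.Println("cp into VM failed:", err)
		return
	}
	back := filepath.Join(os.TempDir(), "cp-test-roundtrip.txt")
	if _, err := run("-p", profile, "cp", profile+":"+remote, back); err != nil {
		fmt.Println("cp out of VM failed:", err)
		return
	}

	want, _ := os.ReadFile(local)
	got, _ := os.ReadFile(back)
	fmt.Println("round trip intact:", bytes.Equal(want, got))
}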

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-481771 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-jl558" [90646cfc-5c68-4e6e-ad3a-2c2b7dfe7ce3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-jl558" [90646cfc-5c68-4e6e-ad3a-2c2b7dfe7ce3] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.122948535s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481771 exec mysql-859648c796-jl558 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-481771 exec mysql-859648c796-jl558 -- mysql -ppassword -e "show databases;": exit status 1 (169.780241ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481771 exec mysql-859648c796-jl558 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-481771 exec mysql-859648c796-jl558 -- mysql -ppassword -e "show databases;": exit status 1 (194.029096ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-481771 exec mysql-859648c796-jl558 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.42s)
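
The two non-zero exits above are expected warm-up failures: the mysql pod reports Running before mysqld has finished initializing, so the first exec attempts hit ERROR 2002 on the local socket and the test simply retries until show databases succeeds. A sketch of that retry loop, with the context, pod name and query taken from this run and a hypothetical retry budget:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The pod can be Running while mysqld is still initializing, so early
	// attempts fail with ERROR 2002; keep retrying until it answers.
	args := []string{"--context", "functional-481771", "exec",
		"mysql-859648c796-jl558", "--", "mysql", "-ppassword",
		"-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}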

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/17301/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /etc/test/nested/copy/17301/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/17301.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /etc/ssl/certs/17301.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/17301.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /usr/share/ca-certificates/17301.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/173012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /etc/ssl/certs/173012.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/173012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /usr/share/ca-certificates/173012.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
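
The paths probed above follow the usual OpenSSL CA layout: the synced certificate appears as a .pem under /etc/ssl/certs and /usr/share/ca-certificates, and additionally under a hashed name such as 51391683.0 or 3ec20f2e.0, which is the certificate's subject hash plus a .0 suffix. Below is a sketch that computes that hashed filename for a local PEM file; the input path is a placeholder, and tying these specific hashes to the synced test certificates is an inference, not something the log states.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder path; substitute the certificate that was synced into the VM.
	certPath := "testdata/minikube_test.pem"
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", certPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	// This is the name the cert appears under in /etc/ssl/certs, e.g. 51391683.0.
	fmt.Printf("/etc/ssl/certs/%s.0\n", hash)
}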

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-481771 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh "sudo systemctl is-active docker": exit status 1 (248.780723ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh "sudo systemctl is-active containerd": exit status 1 (265.970928ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
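
The exit status 1 reported by the ssh wrapper (and the underlying "Process exited with status 3") is the expected answer here: systemctl is-active prints inactive and exits non-zero when a unit is not running, which is exactly what the test wants for docker and containerd on a CRI-O node. The sketch below treats that non-zero exit as a normal "not active" answer rather than a failure; the helper is illustrative, not part of the suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>` inside the minikube VM over ssh.
// systemd exits 0 for an active unit and non-zero (typically 3) otherwise,
// so a non-zero exit here is a normal answer, not an error.
func isActive(profile, unit string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.Output() // stdout carries "active" / "inactive"
	state := strings.TrimSpace(string(out))
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// Non-zero exit is systemd's way of saying the unit is not active.
			return false, nil
		}
		return false, err // the command could not be run at all
	}
	return state == "active", nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, err := isActive("functional-481771", unit)
		fmt.Println(unit, active, err)
	}
}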

                                                
                                    
x
+
TestFunctional/parallel/License (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481771 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-481771
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-481771
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481771 image ls --format short --alsologtostderr:
I0319 19:20:12.474556   26280 out.go:291] Setting OutFile to fd 1 ...
I0319 19:20:12.474658   26280 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:12.474666   26280 out.go:304] Setting ErrFile to fd 2...
I0319 19:20:12.474670   26280 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:12.474900   26280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
I0319 19:20:12.475510   26280 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:12.475613   26280 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:12.475979   26280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:12.476016   26280 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:12.490779   26280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43745
I0319 19:20:12.491213   26280 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:12.491793   26280 main.go:141] libmachine: Using API Version  1
I0319 19:20:12.491820   26280 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:12.492132   26280 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:12.492325   26280 main.go:141] libmachine: (functional-481771) Calling .GetState
I0319 19:20:12.494184   26280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:12.494225   26280 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:12.508173   26280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
I0319 19:20:12.508594   26280 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:12.509044   26280 main.go:141] libmachine: Using API Version  1
I0319 19:20:12.509068   26280 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:12.509323   26280 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:12.509489   26280 main.go:141] libmachine: (functional-481771) Calling .DriverName
I0319 19:20:12.509648   26280 ssh_runner.go:195] Run: systemctl --version
I0319 19:20:12.509672   26280 main.go:141] libmachine: (functional-481771) Calling .GetSSHHostname
I0319 19:20:12.512303   26280 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:12.512752   26280 main.go:141] libmachine: (functional-481771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:44:67", ip: ""} in network mk-functional-481771: {Iface:virbr1 ExpiryTime:2024-03-19 20:17:16 +0000 UTC Type:0 Mac:52:54:00:6a:44:67 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-481771 Clientid:01:52:54:00:6a:44:67}
I0319 19:20:12.512783   26280 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined IP address 192.168.39.193 and MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:12.512872   26280 main.go:141] libmachine: (functional-481771) Calling .GetSSHPort
I0319 19:20:12.513047   26280 main.go:141] libmachine: (functional-481771) Calling .GetSSHKeyPath
I0319 19:20:12.513174   26280 main.go:141] libmachine: (functional-481771) Calling .GetSSHUsername
I0319 19:20:12.513300   26280 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/functional-481771/id_rsa Username:docker}
I0319 19:20:12.595430   26280 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:20:12.637529   26280 main.go:141] libmachine: Making call to close driver server
I0319 19:20:12.637540   26280 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:12.637850   26280 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:12.637882   26280 main.go:141] libmachine: (functional-481771) DBG | Closing plugin on server side
I0319 19:20:12.637910   26280 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:12.637927   26280 main.go:141] libmachine: Making call to close driver server
I0319 19:20:12.637938   26280 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:12.638181   26280 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:12.638207   26280 main.go:141] libmachine: (functional-481771) DBG | Closing plugin on server side
I0319 19:20:12.638239   26280 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
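
As the stderr above shows, image ls on a CRI-O node is ultimately backed by sudo crictl images --output json; the short format is essentially the repo tags from that JSON. Below is a sketch that runs the same command over minikube ssh and prints the tags; the struct covers only the fields used here, and the exact crictl JSON schema is an assumption rather than something captured in this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields needed for a short listing; the full crictl schema is assumed.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-481771",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl listing failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // same lines as `image ls --format short`
		}
	}
}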

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481771 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-481771  | 7d67524195b37 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-481771  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-481771  | 6cd35817de1b0 | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481771 image ls --format table --alsologtostderr:
I0319 19:20:19.713729   26476 out.go:291] Setting OutFile to fd 1 ...
I0319 19:20:19.713979   26476 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:19.713991   26476 out.go:304] Setting ErrFile to fd 2...
I0319 19:20:19.713995   26476 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:19.714160   26476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
I0319 19:20:19.714705   26476 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:19.714797   26476 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:19.715146   26476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:19.715197   26476 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:19.729259   26476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
I0319 19:20:19.729658   26476 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:19.730157   26476 main.go:141] libmachine: Using API Version  1
I0319 19:20:19.730181   26476 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:19.730533   26476 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:19.730755   26476 main.go:141] libmachine: (functional-481771) Calling .GetState
I0319 19:20:19.732608   26476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:19.732651   26476 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:19.750544   26476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
I0319 19:20:19.750918   26476 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:19.751388   26476 main.go:141] libmachine: Using API Version  1
I0319 19:20:19.751413   26476 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:19.751699   26476 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:19.751871   26476 main.go:141] libmachine: (functional-481771) Calling .DriverName
I0319 19:20:19.752040   26476 ssh_runner.go:195] Run: systemctl --version
I0319 19:20:19.752062   26476 main.go:141] libmachine: (functional-481771) Calling .GetSSHHostname
I0319 19:20:19.754458   26476 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:19.754835   26476 main.go:141] libmachine: (functional-481771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:44:67", ip: ""} in network mk-functional-481771: {Iface:virbr1 ExpiryTime:2024-03-19 20:17:16 +0000 UTC Type:0 Mac:52:54:00:6a:44:67 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-481771 Clientid:01:52:54:00:6a:44:67}
I0319 19:20:19.754865   26476 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined IP address 192.168.39.193 and MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:19.755000   26476 main.go:141] libmachine: (functional-481771) Calling .GetSSHPort
I0319 19:20:19.755165   26476 main.go:141] libmachine: (functional-481771) Calling .GetSSHKeyPath
I0319 19:20:19.755299   26476 main.go:141] libmachine: (functional-481771) Calling .GetSSHUsername
I0319 19:20:19.755443   26476 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/functional-481771/id_rsa Username:docker}
I0319 19:20:19.839285   26476 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:20:19.882877   26476 main.go:141] libmachine: Making call to close driver server
I0319 19:20:19.882904   26476 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:19.883155   26476 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:19.883173   26476 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:19.883183   26476 main.go:141] libmachine: Making call to close driver server
I0319 19:20:19.883190   26476 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:19.883364   26476 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:19.883380   26476 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481771 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226
c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-481771"],"size":"34114467"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927a
c287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"f286912fd50149f319e0e31ef32c4a82096c05ca97c132405460a7301010e3c3","repoDigests":["docker.io/library/1fcc7af15c38dc156aad43d98beff755c473e93953f65a5a22a89f8fb7d9bce7-tmp@sha256:6212a2cf1190c1566ef6775ed89267418bd6c3ae065e58920ee42ef9b201c08d
"],"repoTags":[],"size":"1466017"},{"id":"7d67524195b37561bf4bbb36d5a07ee04cdc7811dc613cf050161e68c486649c","repoDigests":["localhost/my-image@sha256:be8dccd7a6310e1b8d71a1f4d798d7d87c47529d77cbf786c59ea47a242c3e9a"],"repoTags":["localhost/my-image:functional-481771"],"size":"1468599"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee
0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10
f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d44
9841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764
febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"6cd35817de1b0d454b9e9a41e171af6f7afe1688b0da229e1f09220028791bcb","repoDigests":["localhost/minikube-local-cache-test@sha256:a911d57a1e333f0a2bb784163c335c153cb077bf706ab8f4f4e0fffc0eb851db"],"repoTags":["localhost/minikube-local-cache-test:functional-481771"],"size":"3345"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481771 image ls --format json --alsologtostderr:
I0319 19:20:19.486718   26452 out.go:291] Setting OutFile to fd 1 ...
I0319 19:20:19.486846   26452 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:19.486856   26452 out.go:304] Setting ErrFile to fd 2...
I0319 19:20:19.486860   26452 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:19.487064   26452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
I0319 19:20:19.487638   26452 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:19.487746   26452 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:19.488141   26452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:19.488183   26452 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:19.502554   26452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
I0319 19:20:19.502987   26452 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:19.503610   26452 main.go:141] libmachine: Using API Version  1
I0319 19:20:19.503632   26452 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:19.503937   26452 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:19.504119   26452 main.go:141] libmachine: (functional-481771) Calling .GetState
I0319 19:20:19.505976   26452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:19.506028   26452 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:19.519775   26452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37899
I0319 19:20:19.520184   26452 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:19.520673   26452 main.go:141] libmachine: Using API Version  1
I0319 19:20:19.520694   26452 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:19.520973   26452 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:19.521110   26452 main.go:141] libmachine: (functional-481771) Calling .DriverName
I0319 19:20:19.521270   26452 ssh_runner.go:195] Run: systemctl --version
I0319 19:20:19.521290   26452 main.go:141] libmachine: (functional-481771) Calling .GetSSHHostname
I0319 19:20:19.524015   26452 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:19.524481   26452 main.go:141] libmachine: (functional-481771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:44:67", ip: ""} in network mk-functional-481771: {Iface:virbr1 ExpiryTime:2024-03-19 20:17:16 +0000 UTC Type:0 Mac:52:54:00:6a:44:67 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-481771 Clientid:01:52:54:00:6a:44:67}
I0319 19:20:19.524508   26452 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined IP address 192.168.39.193 and MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:19.524652   26452 main.go:141] libmachine: (functional-481771) Calling .GetSSHPort
I0319 19:20:19.524804   26452 main.go:141] libmachine: (functional-481771) Calling .GetSSHKeyPath
I0319 19:20:19.524934   26452 main.go:141] libmachine: (functional-481771) Calling .GetSSHUsername
I0319 19:20:19.525059   26452 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/functional-481771/id_rsa Username:docker}
I0319 19:20:19.607216   26452 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:20:19.656708   26452 main.go:141] libmachine: Making call to close driver server
I0319 19:20:19.656737   26452 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:19.656970   26452 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:19.656991   26452 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:19.657011   26452 main.go:141] libmachine: Making call to close driver server
I0319 19:20:19.657020   26452 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:19.657250   26452 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:19.657290   26452 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:19.657259   26452 main.go:141] libmachine: (functional-481771) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481771 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-481771
size: "34114467"
- id: 6cd35817de1b0d454b9e9a41e171af6f7afe1688b0da229e1f09220028791bcb
repoDigests:
- localhost/minikube-local-cache-test@sha256:a911d57a1e333f0a2bb784163c335c153cb077bf706ab8f4f4e0fffc0eb851db
repoTags:
- localhost/minikube-local-cache-test:functional-481771
size: "3345"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481771 image ls --format yaml --alsologtostderr:
I0319 19:20:12.691479   26304 out.go:291] Setting OutFile to fd 1 ...
I0319 19:20:12.691577   26304 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:12.691587   26304 out.go:304] Setting ErrFile to fd 2...
I0319 19:20:12.691591   26304 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:12.691790   26304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
I0319 19:20:12.693159   26304 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:12.693385   26304 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:12.694190   26304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:12.694227   26304 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:12.708832   26304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
I0319 19:20:12.709245   26304 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:12.709846   26304 main.go:141] libmachine: Using API Version  1
I0319 19:20:12.709872   26304 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:12.710182   26304 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:12.710349   26304 main.go:141] libmachine: (functional-481771) Calling .GetState
I0319 19:20:12.712386   26304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:12.712421   26304 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:12.726578   26304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
I0319 19:20:12.727005   26304 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:12.727441   26304 main.go:141] libmachine: Using API Version  1
I0319 19:20:12.727458   26304 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:12.727733   26304 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:12.727905   26304 main.go:141] libmachine: (functional-481771) Calling .DriverName
I0319 19:20:12.728074   26304 ssh_runner.go:195] Run: systemctl --version
I0319 19:20:12.728094   26304 main.go:141] libmachine: (functional-481771) Calling .GetSSHHostname
I0319 19:20:12.730537   26304 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:12.730890   26304 main.go:141] libmachine: (functional-481771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:44:67", ip: ""} in network mk-functional-481771: {Iface:virbr1 ExpiryTime:2024-03-19 20:17:16 +0000 UTC Type:0 Mac:52:54:00:6a:44:67 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-481771 Clientid:01:52:54:00:6a:44:67}
I0319 19:20:12.730920   26304 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined IP address 192.168.39.193 and MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:12.731014   26304 main.go:141] libmachine: (functional-481771) Calling .GetSSHPort
I0319 19:20:12.731172   26304 main.go:141] libmachine: (functional-481771) Calling .GetSSHKeyPath
I0319 19:20:12.731324   26304 main.go:141] libmachine: (functional-481771) Calling .GetSSHUsername
I0319 19:20:12.731462   26304 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/functional-481771/id_rsa Username:docker}
I0319 19:20:12.816040   26304 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:20:12.858866   26304 main.go:141] libmachine: Making call to close driver server
I0319 19:20:12.858882   26304 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:12.859150   26304 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:12.859199   26304 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:12.859219   26304 main.go:141] libmachine: Making call to close driver server
I0319 19:20:12.859228   26304 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:12.859478   26304 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:12.859493   26304 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:12.859501   26304 main.go:141] libmachine: (functional-481771) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh pgrep buildkitd: exit status 1 (196.811444ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image build -t localhost/my-image:functional-481771 testdata/build --alsologtostderr
E0319 19:20:15.075969   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image build -t localhost/my-image:functional-481771 testdata/build --alsologtostderr: (6.143066415s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-481771 image build -t localhost/my-image:functional-481771 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f286912fd50
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-481771
--> 7d67524195b
Successfully tagged localhost/my-image:functional-481771
7d67524195b37561bf4bbb36d5a07ee04cdc7811dc613cf050161e68c486649c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-481771 image build -t localhost/my-image:functional-481771 testdata/build --alsologtostderr:
I0319 19:20:13.113059   26358 out.go:291] Setting OutFile to fd 1 ...
I0319 19:20:13.113477   26358 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:13.113495   26358 out.go:304] Setting ErrFile to fd 2...
I0319 19:20:13.113503   26358 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 19:20:13.113919   26358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
I0319 19:20:13.114953   26358 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:13.115434   26358 config.go:182] Loaded profile config "functional-481771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0319 19:20:13.115766   26358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:13.115813   26358 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:13.130144   26358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
I0319 19:20:13.130565   26358 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:13.131044   26358 main.go:141] libmachine: Using API Version  1
I0319 19:20:13.131067   26358 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:13.131386   26358 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:13.131578   26358 main.go:141] libmachine: (functional-481771) Calling .GetState
I0319 19:20:13.133395   26358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0319 19:20:13.133426   26358 main.go:141] libmachine: Launching plugin server for driver kvm2
I0319 19:20:13.147213   26358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46493
I0319 19:20:13.147656   26358 main.go:141] libmachine: () Calling .GetVersion
I0319 19:20:13.148143   26358 main.go:141] libmachine: Using API Version  1
I0319 19:20:13.148186   26358 main.go:141] libmachine: () Calling .SetConfigRaw
I0319 19:20:13.148524   26358 main.go:141] libmachine: () Calling .GetMachineName
I0319 19:20:13.148700   26358 main.go:141] libmachine: (functional-481771) Calling .DriverName
I0319 19:20:13.148878   26358 ssh_runner.go:195] Run: systemctl --version
I0319 19:20:13.148902   26358 main.go:141] libmachine: (functional-481771) Calling .GetSSHHostname
I0319 19:20:13.151249   26358 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:13.151630   26358 main.go:141] libmachine: (functional-481771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:44:67", ip: ""} in network mk-functional-481771: {Iface:virbr1 ExpiryTime:2024-03-19 20:17:16 +0000 UTC Type:0 Mac:52:54:00:6a:44:67 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-481771 Clientid:01:52:54:00:6a:44:67}
I0319 19:20:13.151650   26358 main.go:141] libmachine: (functional-481771) DBG | domain functional-481771 has defined IP address 192.168.39.193 and MAC address 52:54:00:6a:44:67 in network mk-functional-481771
I0319 19:20:13.151773   26358 main.go:141] libmachine: (functional-481771) Calling .GetSSHPort
I0319 19:20:13.151981   26358 main.go:141] libmachine: (functional-481771) Calling .GetSSHKeyPath
I0319 19:20:13.152132   26358 main.go:141] libmachine: (functional-481771) Calling .GetSSHUsername
I0319 19:20:13.152271   26358 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/functional-481771/id_rsa Username:docker}
I0319 19:20:13.239676   26358 build_images.go:161] Building image from path: /tmp/build.2733385229.tar
I0319 19:20:13.239736   26358 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0319 19:20:13.254856   26358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2733385229.tar
I0319 19:20:13.260106   26358 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2733385229.tar: stat -c "%s %y" /var/lib/minikube/build/build.2733385229.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2733385229.tar': No such file or directory
I0319 19:20:13.260149   26358 ssh_runner.go:362] scp /tmp/build.2733385229.tar --> /var/lib/minikube/build/build.2733385229.tar (3072 bytes)
I0319 19:20:13.290124   26358 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2733385229
I0319 19:20:13.302574   26358 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2733385229 -xf /var/lib/minikube/build/build.2733385229.tar
I0319 19:20:13.322711   26358 crio.go:315] Building image: /var/lib/minikube/build/build.2733385229
I0319 19:20:13.322756   26358 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-481771 /var/lib/minikube/build/build.2733385229 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0319 19:20:19.165913   26358 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-481771 /var/lib/minikube/build/build.2733385229 --cgroup-manager=cgroupfs: (5.843135591s)
I0319 19:20:19.165986   26358 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2733385229
I0319 19:20:19.182671   26358 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2733385229.tar
I0319 19:20:19.200278   26358 build_images.go:217] Built localhost/my-image:functional-481771 from /tmp/build.2733385229.tar
I0319 19:20:19.200314   26358 build_images.go:133] succeeded building to: functional-481771
I0319 19:20:19.200320   26358 build_images.go:134] failed building to: 
I0319 19:20:19.200395   26358 main.go:141] libmachine: Making call to close driver server
I0319 19:20:19.200415   26358 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:19.200708   26358 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:19.200720   26358 main.go:141] libmachine: (functional-481771) DBG | Closing plugin on server side
I0319 19:20:19.200726   26358 main.go:141] libmachine: Making call to close connection to plugin binary
I0319 19:20:19.200747   26358 main.go:141] libmachine: Making call to close driver server
I0319 19:20:19.200756   26358 main.go:141] libmachine: (functional-481771) Calling .Close
I0319 19:20:19.201041   26358 main.go:141] libmachine: (functional-481771) DBG | Closing plugin on server side
I0319 19:20:19.201063   26358 main.go:141] libmachine: Successfully made call to close driver server
I0319 19:20:19.201075   26358 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.57s)
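Note: the contents of testdata/build are not included in this log. The sketch below is inferred from the three STEP lines in the build output above; the /tmp path and the content.txt payload are placeholders, not the repository's actual files.
# Reproduce the ImageBuild flow against the same profile (sketch only):
mkdir -p /tmp/build-sketch
echo "placeholder" > /tmp/build-sketch/content.txt        # stand-in for testdata/build/content.txt
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-481771 image build -t localhost/my-image:functional-481771 /tmp/build-sketch --alsologtostderr
out/minikube-linux-amd64 -p functional-481771 image ls    # the new localhost/my-image tag should be listed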

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.109292807s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-481771
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "228.974252ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "54.779878ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "223.773955ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "54.712342ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image load --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image load --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr: (7.681024236s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image load --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image load --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr: (5.436197719s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.426372238s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-481771
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image load --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image load --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr: (4.483593752s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image save gcr.io/google-containers/addon-resizer:functional-481771 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image save gcr.io/google-containers/addon-resizer:functional-481771 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.703559647s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (28.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-481771 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-481771 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-fjmjg" [408b28f6-e18e-451b-b248-d263f35fa43f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-fjmjg" [408b28f6-e18e-451b-b248-d263f35fa43f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 28.005387876s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (28.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image rm gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.188390589s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.50s)
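Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a save/remove/load round trip. A condensed sketch of that flow, using a /tmp path in place of the Jenkins workspace path from the log:
out/minikube-linux-amd64 -p functional-481771 image save gcr.io/google-containers/addon-resizer:functional-481771 /tmp/addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-481771 image rm gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr
out/minikube-linux-amd64 -p functional-481771 image load /tmp/addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-481771 image ls    # the addon-resizer tag should be back after the load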

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdany-port3961297398/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710875999422106271" to /tmp/TestFunctionalparallelMountCmdany-port3961297398/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710875999422106271" to /tmp/TestFunctionalparallelMountCmdany-port3961297398/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710875999422106271" to /tmp/TestFunctionalparallelMountCmdany-port3961297398/001/test-1710875999422106271
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.468656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 19 19:19 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 19 19:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 19 19:19 test-1710875999422106271
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh cat /mount-9p/test-1710875999422106271
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-481771 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3c21b875-65ac-4bd8-92bc-8f9b5609e8e8] Pending
helpers_test.go:344: "busybox-mount" [3c21b875-65ac-4bd8-92bc-8f9b5609e8e8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3c21b875-65ac-4bd8-92bc-8f9b5609e8e8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3c21b875-65ac-4bd8-92bc-8f9b5609e8e8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004548517s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-481771 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh stat /mount-9p/created-by-test
E0319 19:20:07.394987   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdany-port3961297398/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.83s)
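The any-port, specific-port and VerifyCleanup tests all follow the same 9p mount pattern: start a background mount, confirm it with findmnt inside the guest, then tear it down. A minimal manual sketch of that pattern (the /tmp/mount-sketch path is a placeholder):
mkdir -p /tmp/mount-sketch
out/minikube-linux-amd64 mount -p functional-481771 /tmp/mount-sketch:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p"   # the test retries this until the mount shows up
out/minikube-linux-amd64 -p functional-481771 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"
out/minikube-linux-amd64 mount -p functional-481771 --kill=true                      # cleanup used by VerifyCleanup for leftover mount processes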

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-481771
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 image save --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 image save --daemon gcr.io/google-containers/addon-resizer:functional-481771 --alsologtostderr: (1.393508224s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-481771
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdspecific-port1685259467/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.264597ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdspecific-port1685259467/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh "sudo umount -f /mount-9p": exit status 1 (218.672943ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-481771 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdspecific-port1685259467/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T" /mount1
E0319 19:20:09.955196   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T" /mount1: exit status 1 (324.525048ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-481771 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-481771 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2429415721/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 service list
2024/03/19 19:20:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 service list: (1.236842093s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-481771 service list -o json: (1.23458416s)
functional_test.go:1490: Took "1.234669648s" to run "out/minikube-linux-amd64 -p functional-481771 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.193:32693
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-481771 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.193:32693
E0319 19:20:45.797738   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:21:26.758337   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:22:48.678880   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
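The ServiceCmd tests above (DeployApp, List, JSONOutput, HTTPS, Format, URL) boil down to deploying the echoserver, exposing it as a NodePort service, and asking minikube for the endpoint. Condensed sketch of that flow:
kubectl --context functional-481771 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-481771 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-481771 get pods -l app=hello-node                                 # wait until the pod is Running
out/minikube-linux-amd64 -p functional-481771 service list
out/minikube-linux-amd64 -p functional-481771 service hello-node --url                         # e.g. http://192.168.39.193:32693 in this run
out/minikube-linux-amd64 -p functional-481771 service --namespace=default --https --url hello-node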

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-481771
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-481771
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-481771
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (226s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-218762 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0319 19:24:30.844284   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:30.849561   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:30.859829   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:30.880149   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:30.920415   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:31.000727   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:31.161050   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:31.481469   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:32.121871   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:33.402358   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:35.962593   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:41.082850   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:24:51.323848   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:25:04.834848   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:25:11.804442   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 19:25:32.520767   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 19:25:52.765519   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-218762 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m45.277261316s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (226.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-218762 -- rollout status deployment/busybox: (4.865838039s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-d8xsk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-ds2kh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-qrc54 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-d8xsk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-ds2kh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-qrc54 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-d8xsk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-ds2kh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-qrc54 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.24s)
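The exec/nslookup sequence above runs the same three lookups in every busybox pod. A loop-based sketch of the same check (it assumes, as in this run, that the busybox pods are the only pods in the default namespace):
for pod in $(out/minikube-linux-amd64 kubectl -p ha-218762 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec "$pod" -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec "$pod" -- nslookup kubernetes.default
  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done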

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-d8xsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-d8xsk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-ds2kh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-ds2kh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-qrc54 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-218762 -- exec busybox-7fdf7869d9-qrc54 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)
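In PingHostFromPods, the awk 'NR==5' | cut -d' ' -f3 pipeline extracts the resolved address of host.minikube.internal from nslookup's output (192.168.39.1 in this run, the host side of the libvirt network), and that address is then pinged from inside the pod. Sketch for a single pod, reusing a pod name from the run above:
POD=busybox-7fdf7869d9-d8xsk
HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-218762 -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
out/minikube-linux-amd64 kubectl -p ha-218762 -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"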

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-218762 -v=7 --alsologtostderr
E0319 19:27:14.686140   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-218762 -v=7 --alsologtostderr: (46.410523588s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-218762 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp testdata/cp-test.txt ha-218762:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762:/home/docker/cp-test.txt ha-218762-m02:/home/docker/cp-test_ha-218762_ha-218762-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test_ha-218762_ha-218762-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762:/home/docker/cp-test.txt ha-218762-m03:/home/docker/cp-test_ha-218762_ha-218762-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test_ha-218762_ha-218762-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762:/home/docker/cp-test.txt ha-218762-m04:/home/docker/cp-test_ha-218762_ha-218762-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test_ha-218762_ha-218762-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp testdata/cp-test.txt ha-218762-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m02:/home/docker/cp-test.txt ha-218762:/home/docker/cp-test_ha-218762-m02_ha-218762.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test_ha-218762-m02_ha-218762.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m02:/home/docker/cp-test.txt ha-218762-m03:/home/docker/cp-test_ha-218762-m02_ha-218762-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test_ha-218762-m02_ha-218762-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m02:/home/docker/cp-test.txt ha-218762-m04:/home/docker/cp-test_ha-218762-m02_ha-218762-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test_ha-218762-m02_ha-218762-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp testdata/cp-test.txt ha-218762-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt ha-218762:/home/docker/cp-test_ha-218762-m03_ha-218762.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test_ha-218762-m03_ha-218762.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt ha-218762-m02:/home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test_ha-218762-m03_ha-218762-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m03:/home/docker/cp-test.txt ha-218762-m04:/home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test_ha-218762-m03_ha-218762-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp testdata/cp-test.txt ha-218762-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1848459454/001/cp-test_ha-218762-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt ha-218762:/home/docker/cp-test_ha-218762-m04_ha-218762.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762 "sudo cat /home/docker/cp-test_ha-218762-m04_ha-218762.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt ha-218762-m02:/home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m02 "sudo cat /home/docker/cp-test_ha-218762-m04_ha-218762-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 cp ha-218762-m04:/home/docker/cp-test.txt ha-218762-m03:/home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 ssh -n ha-218762-m03 "sudo cat /home/docker/cp-test_ha-218762-m04_ha-218762-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.63s)
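
The CopyFile steps above all follow the same round trip: "minikube cp" pushes a file onto a node, then "ssh -n <node> sudo cat ..." reads it back to prove the copy landed. A minimal Go sketch of that check follows; the binary path, profile and node names are placeholders taken from this run, and the helper is illustrative rather than code from the suite.

	// copy_check.go: sketch of the cp-then-cat verification pattern used above.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// minikube runs the local minikube binary and fails the sketch on any error.
	func minikube(args ...string) string {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		profile, node := "ha-218762", "ha-218762-m02" // placeholder profile and node

		// Push a local file onto the node, then read it back over SSH and compare.
		minikube("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		got := minikube("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
		if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
			log.Fatalf("content mismatch after cp to %s", node)
		}
		log.Printf("cp to %s verified", node)
	}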

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.490687472s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-218762 node delete m03 -v=7 --alsologtostderr: (16.680254159s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-218762 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.44s)
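
The readiness check above hands kubectl a plain Go text/template that walks every node's conditions and prints the status of the "Ready" one. The same template can be exercised locally against a hand-built node list, which makes its behaviour easy to see; this is an illustration, not code from the test suite.

	// ready_template.go: evaluate the node-readiness template used above against fake data.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	func main() {
		// The template string passed to `kubectl get nodes -o go-template=...` above
		// (without the surrounding quotes).
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		// Hand-built stand-in for a NodeList; keys mirror the JSON kubectl returns.
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "Ready", "status": "True"},
				}}},
			},
		}

		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil {
			log.Fatal(err)
		}
		// Prints one " True" line per node in this fake list.
	}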

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
x
+
TestJSONOutput/start/Command (96.88s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-367763 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0319 19:54:30.844389   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-367763 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.876381098s)
--- PASS: TestJSONOutput/start/Command (96.88s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-367763 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-367763 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-367763 --output=json --user=testUser
E0319 19:55:04.834306   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-367763 --output=json --user=testUser: (7.370213959s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-536094 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-536094 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.010905ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4f70ba5-b14c-4984-a74b-f42d7d8f588f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-536094] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fa12354-0613-4a4d-b0e3-73b1a4db6c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18453"}}
	{"specversion":"1.0","id":"99d6df9b-59b6-48c5-b7b5-07e3ffd632cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b7d6802-250d-40b2-a9dd-77293d13aec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig"}}
	{"specversion":"1.0","id":"77cd458f-3c78-4a3b-86d0-0b831cf0de58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube"}}
	{"specversion":"1.0","id":"e65326c4-48df-4cc6-9121-8084c1105801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"81332cc0-5801-43de-855e-0ad644831270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"539e204e-fffc-426c-a638-578a8c54db5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-536094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-536094
--- PASS: TestErrorJSONOutput (0.21s)
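
Each stdout line above is one JSON object in the CloudEvents-style shape minikube emits for --output=json: specversion, id, source, type, datacontenttype, and a data payload. A short Go sketch that decodes the error event from this run and pulls out its message and exit code:

	// json_event.go: decode one line of the --output=json stream shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// event mirrors the fields visible in the stdout block above.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// The DRV_UNSUPPORTED_OS error event copied from the output above.
		line := `{"specversion":"1.0","id":"539e204e-fffc-426c-a638-578a8c54db5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s (exit code %s)\n", e.Type, e.Data["message"], e.Data["exitcode"])
	}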

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (91.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-174293 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-174293 --driver=kvm2  --container-runtime=crio: (45.124040173s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-176763 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-176763 --driver=kvm2  --container-runtime=crio: (44.398192939s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-174293
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-176763
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-176763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-176763
helpers_test.go:175: Cleaning up "first-174293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-174293
--- PASS: TestMinikubeProfile (91.99s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-989980 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-989980 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.698482564s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-989980 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-989980 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-005066 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0319 19:57:33.889998   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-005066 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.732588999s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005066 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005066 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-989980 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005066 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005066 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-005066
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-005066: (2.289549178s)
--- PASS: TestMountStart/serial/Stop (2.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.65s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-005066
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-005066: (23.646449208s)
--- PASS: TestMountStart/serial/RestartStopped (24.65s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005066 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005066 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (107.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695944 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0319 19:59:30.844244   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695944 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.945343033s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.36s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-695944 -- rollout status deployment/busybox: (4.32899563s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-dlzz4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-qsnxk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-dlzz4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-qsnxk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-dlzz4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-qsnxk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.94s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-dlzz4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-dlzz4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-qsnxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-695944 -- exec busybox-7fdf7869d9-qsnxk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (40.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-695944 -v 3 --alsologtostderr
E0319 20:00:04.834615   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-695944 -v 3 --alsologtostderr: (40.16215644s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.73s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-695944 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp testdata/cp-test.txt multinode-695944:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3232251347/001/cp-test_multinode-695944.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944:/home/docker/cp-test.txt multinode-695944-m02:/home/docker/cp-test_multinode-695944_multinode-695944-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m02 "sudo cat /home/docker/cp-test_multinode-695944_multinode-695944-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944:/home/docker/cp-test.txt multinode-695944-m03:/home/docker/cp-test_multinode-695944_multinode-695944-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m03 "sudo cat /home/docker/cp-test_multinode-695944_multinode-695944-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp testdata/cp-test.txt multinode-695944-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3232251347/001/cp-test_multinode-695944-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt multinode-695944:/home/docker/cp-test_multinode-695944-m02_multinode-695944.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944 "sudo cat /home/docker/cp-test_multinode-695944-m02_multinode-695944.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944-m02:/home/docker/cp-test.txt multinode-695944-m03:/home/docker/cp-test_multinode-695944-m02_multinode-695944-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m03 "sudo cat /home/docker/cp-test_multinode-695944-m02_multinode-695944-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp testdata/cp-test.txt multinode-695944-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3232251347/001/cp-test_multinode-695944-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt multinode-695944:/home/docker/cp-test_multinode-695944-m03_multinode-695944.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944 "sudo cat /home/docker/cp-test_multinode-695944-m03_multinode-695944.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 cp multinode-695944-m03:/home/docker/cp-test.txt multinode-695944-m02:/home/docker/cp-test_multinode-695944-m03_multinode-695944-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 ssh -n multinode-695944-m02 "sudo cat /home/docker/cp-test_multinode-695944-m03_multinode-695944-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.44s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-695944 node stop m03: (1.609869544s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695944 status: exit status 7 (443.95469ms)

                                                
                                                
-- stdout --
	multinode-695944
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-695944-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-695944-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-695944 status --alsologtostderr: exit status 7 (453.423227ms)

                                                
                                                
-- stdout --
	multinode-695944
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-695944-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-695944-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:00:50.605216   42995 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:00:50.605454   42995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:00:50.605465   42995 out.go:304] Setting ErrFile to fd 2...
	I0319 20:00:50.605471   42995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:00:50.605663   42995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:00:50.605843   42995 out.go:298] Setting JSON to false
	I0319 20:00:50.605878   42995 mustload.go:65] Loading cluster: multinode-695944
	I0319 20:00:50.605985   42995 notify.go:220] Checking for updates...
	I0319 20:00:50.606258   42995 config.go:182] Loaded profile config "multinode-695944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:00:50.606274   42995 status.go:255] checking status of multinode-695944 ...
	I0319 20:00:50.606680   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.606752   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.626759   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0319 20:00:50.627164   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:50.627772   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:50.627807   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:50.628226   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:50.628437   42995 main.go:141] libmachine: (multinode-695944) Calling .GetState
	I0319 20:00:50.630051   42995 status.go:330] multinode-695944 host status = "Running" (err=<nil>)
	I0319 20:00:50.630065   42995 host.go:66] Checking if "multinode-695944" exists ...
	I0319 20:00:50.630420   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.630461   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.645945   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0319 20:00:50.646342   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:50.646795   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:50.646818   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:50.647157   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:50.647344   42995 main.go:141] libmachine: (multinode-695944) Calling .GetIP
	I0319 20:00:50.649841   42995 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:00:50.650272   42995 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:00:50.650303   42995 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:00:50.650434   42995 host.go:66] Checking if "multinode-695944" exists ...
	I0319 20:00:50.650692   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.650726   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.665310   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45701
	I0319 20:00:50.665634   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:50.666040   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:50.666058   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:50.666407   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:50.666608   42995 main.go:141] libmachine: (multinode-695944) Calling .DriverName
	I0319 20:00:50.666808   42995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 20:00:50.666833   42995 main.go:141] libmachine: (multinode-695944) Calling .GetSSHHostname
	I0319 20:00:50.669412   42995 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:00:50.669793   42995 main.go:141] libmachine: (multinode-695944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:fe", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:58:21 +0000 UTC Type:0 Mac:52:54:00:c6:d0:fe Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-695944 Clientid:01:52:54:00:c6:d0:fe}
	I0319 20:00:50.669830   42995 main.go:141] libmachine: (multinode-695944) DBG | domain multinode-695944 has defined IP address 192.168.39.64 and MAC address 52:54:00:c6:d0:fe in network mk-multinode-695944
	I0319 20:00:50.669945   42995 main.go:141] libmachine: (multinode-695944) Calling .GetSSHPort
	I0319 20:00:50.670114   42995 main.go:141] libmachine: (multinode-695944) Calling .GetSSHKeyPath
	I0319 20:00:50.670263   42995 main.go:141] libmachine: (multinode-695944) Calling .GetSSHUsername
	I0319 20:00:50.670418   42995 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944/id_rsa Username:docker}
	I0319 20:00:50.749329   42995 ssh_runner.go:195] Run: systemctl --version
	I0319 20:00:50.758135   42995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:00:50.776518   42995 kubeconfig.go:125] found "multinode-695944" server: "https://192.168.39.64:8443"
	I0319 20:00:50.776539   42995 api_server.go:166] Checking apiserver status ...
	I0319 20:00:50.776573   42995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 20:00:50.796324   42995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0319 20:00:50.812517   42995 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 20:00:50.812574   42995 ssh_runner.go:195] Run: ls
	I0319 20:00:50.818075   42995 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0319 20:00:50.822348   42995 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0319 20:00:50.822387   42995 status.go:422] multinode-695944 apiserver status = Running (err=<nil>)
	I0319 20:00:50.822398   42995 status.go:257] multinode-695944 status: &{Name:multinode-695944 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 20:00:50.822722   42995 status.go:255] checking status of multinode-695944-m02 ...
	I0319 20:00:50.823330   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.823415   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.838530   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0319 20:00:50.838932   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:50.839452   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:50.839477   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:50.839815   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:50.839995   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .GetState
	I0319 20:00:50.841407   42995 status.go:330] multinode-695944-m02 host status = "Running" (err=<nil>)
	I0319 20:00:50.841421   42995 host.go:66] Checking if "multinode-695944-m02" exists ...
	I0319 20:00:50.841676   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.841704   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.858028   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0319 20:00:50.858435   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:50.858953   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:50.858995   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:50.859327   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:50.859510   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .GetIP
	I0319 20:00:50.862305   42995 main.go:141] libmachine: (multinode-695944-m02) DBG | domain multinode-695944-m02 has defined MAC address 52:54:00:18:4a:2f in network mk-multinode-695944
	I0319 20:00:50.862689   42995 main.go:141] libmachine: (multinode-695944-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:4a:2f", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:59:26 +0000 UTC Type:0 Mac:52:54:00:18:4a:2f Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-695944-m02 Clientid:01:52:54:00:18:4a:2f}
	I0319 20:00:50.862710   42995 main.go:141] libmachine: (multinode-695944-m02) DBG | domain multinode-695944-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:18:4a:2f in network mk-multinode-695944
	I0319 20:00:50.862845   42995 host.go:66] Checking if "multinode-695944-m02" exists ...
	I0319 20:00:50.863119   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.863152   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.881529   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0319 20:00:50.881911   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:50.882320   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:50.882341   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:50.882642   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:50.882818   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .DriverName
	I0319 20:00:50.883041   42995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 20:00:50.883061   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .GetSSHHostname
	I0319 20:00:50.885515   42995 main.go:141] libmachine: (multinode-695944-m02) DBG | domain multinode-695944-m02 has defined MAC address 52:54:00:18:4a:2f in network mk-multinode-695944
	I0319 20:00:50.885886   42995 main.go:141] libmachine: (multinode-695944-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:4a:2f", ip: ""} in network mk-multinode-695944: {Iface:virbr1 ExpiryTime:2024-03-19 20:59:26 +0000 UTC Type:0 Mac:52:54:00:18:4a:2f Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-695944-m02 Clientid:01:52:54:00:18:4a:2f}
	I0319 20:00:50.885919   42995 main.go:141] libmachine: (multinode-695944-m02) DBG | domain multinode-695944-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:18:4a:2f in network mk-multinode-695944
	I0319 20:00:50.886039   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .GetSSHPort
	I0319 20:00:50.886179   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .GetSSHKeyPath
	I0319 20:00:50.886349   42995 main.go:141] libmachine: (multinode-695944-m02) Calling .GetSSHUsername
	I0319 20:00:50.886449   42995 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18453-10028/.minikube/machines/multinode-695944-m02/id_rsa Username:docker}
	I0319 20:00:50.968699   42995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 20:00:50.984743   42995 status.go:257] multinode-695944-m02 status: &{Name:multinode-695944-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0319 20:00:50.984774   42995 status.go:255] checking status of multinode-695944-m03 ...
	I0319 20:00:50.985067   42995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0319 20:00:50.985100   42995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0319 20:00:50.999683   42995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0319 20:00:51.000058   42995 main.go:141] libmachine: () Calling .GetVersion
	I0319 20:00:51.000617   42995 main.go:141] libmachine: Using API Version  1
	I0319 20:00:51.000641   42995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0319 20:00:51.000922   42995 main.go:141] libmachine: () Calling .GetMachineName
	I0319 20:00:51.001124   42995 main.go:141] libmachine: (multinode-695944-m03) Calling .GetState
	I0319 20:00:51.002413   42995 status.go:330] multinode-695944-m03 host status = "Stopped" (err=<nil>)
	I0319 20:00:51.002428   42995 status.go:343] host is not running, skipping remaining checks
	I0319 20:00:51.002436   42995 status.go:257] multinode-695944-m03 status: &{Name:multinode-695944-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)
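
After the node stop above, "minikube status" keeps printing per-node state on stdout but returns exit status 7 instead of 0, so the test accepts the non-zero exit and still inspects the output. Below is a sketch of capturing that exit code from Go; the meaning of code 7 here is read off this particular run, not asserted as a general contract.

	// status_exit.go: run `minikube status` and report its exit code alongside its output.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-695944", "status")
		out, err := cmd.CombinedOutput()

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // non-zero exit, e.g. 7 in the run above
		} else if err != nil {
			log.Fatal(err) // the binary could not be started at all
		}

		fmt.Printf("exit code %d\n%s", code, out)
	}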

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (32.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-695944 node start m03 -v=7 --alsologtostderr: (31.436237268s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.09s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-695944 node delete m03: (1.655973581s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.19s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (168.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695944 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0319 20:09:30.843989   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 20:09:47.882797   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 20:10:04.834358   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695944 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m48.392387488s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-695944 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (168.93s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-695944
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695944-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-695944-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.429143ms)

                                                
                                                
-- stdout --
	* [multinode-695944-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-695944-m02' is duplicated with machine name 'multinode-695944-m02' in profile 'multinode-695944'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-695944-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-695944-m03 --driver=kvm2  --container-runtime=crio: (44.066053868s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-695944
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-695944: exit status 80 (218.507219ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-695944 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-695944-m03 already exists in multinode-695944-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-695944-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.18s)
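For reference, the name-conflict guard this subtest exercises can be repeated by hand with the same commands (shown here with a minikube binary on PATH standing in for out/minikube-linux-amd64):

  # a profile name that collides with an existing machine name is rejected up front
  minikube start -p multinode-695944-m02 --driver=kvm2 --container-runtime=crio
  # exit status 14: "X Exiting due to MK_USAGE: Profile name should be unique"

  # a non-conflicting name is accepted, and the throwaway profile is deleted afterwards
  minikube start -p multinode-695944-m03 --driver=kvm2 --container-runtime=crio
  minikube delete -p multinode-695944-m03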

                                                
                                    
TestScheduledStopUnix (117.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-392366 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-392366 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.428120913s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392366 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-392366 -n scheduled-stop-392366
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392366 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392366 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-392366 -n scheduled-stop-392366
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-392366
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392366 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-392366
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-392366: exit status 7 (74.399852ms)

                                                
                                                
-- stdout --
	scheduled-stop-392366
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-392366 -n scheduled-stop-392366
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-392366 -n scheduled-stop-392366: exit status 7 (69.593632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-392366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-392366
--- PASS: TestScheduledStopUnix (117.17s)
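The scheduled-stop flow above boils down to a handful of CLI calls; a minimal manual sequence with the same flags (profile name taken from the log, minikube assumed on PATH) would be:

  minikube stop -p scheduled-stop-392366 --schedule 5m        # arm a stop five minutes out
  minikube status --format={{.TimeToStop}} -p scheduled-stop-392366
  minikube stop -p scheduled-stop-392366 --cancel-scheduled   # disarm the pending stop
  minikube stop -p scheduled-stop-392366 --schedule 15s       # re-arm with a short delay
  minikube status -p scheduled-stop-392366                    # exit status 7 once the host reports Stopped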

                                                
                                    
TestRunningBinaryUpgrade (226.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.941711652 start -p running-upgrade-844458 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0319 20:19:30.844341   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 20:20:04.834777   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.941711652 start -p running-upgrade-844458 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.426057924s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-844458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-844458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m36.983408732s)
helpers_test.go:175: Cleaning up "running-upgrade-844458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-844458
--- PASS: TestRunningBinaryUpgrade (226.51s)
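The running-binary upgrade is just two starts against the same profile, first with the old released binary the test downloads to /tmp and then with the binary under test; condensed from the log:

  /tmp/minikube-v1.26.0.941711652 start -p running-upgrade-844458 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p running-upgrade-844458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 delete -p running-upgrade-844458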

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833757 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-833757 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (97.469679ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-833757] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
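This subtest only checks that --no-kubernetes and --kubernetes-version are rejected when combined; the failing call and the remedy suggested in the output above are:

  minikube start -p NoKubernetes-833757 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
  # exit status 14 (MK_USAGE); if a version is set in the global config, clear it first:
  minikube config unset kubernetes-version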

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.01s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833757 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833757 --driver=kvm2  --container-runtime=crio: (1m36.756917033s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-833757 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.42s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833757 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833757 --no-kubernetes --driver=kvm2  --container-runtime=crio: (7.324814539s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-833757 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-833757 status -o json: exit status 2 (236.15425ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-833757","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-833757
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.42s)
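Re-starting an existing profile with --no-kubernetes keeps the host running but stops the Kubernetes components, which is why the JSON status above shows Host Running with Kubelet and APIServer Stopped and the command exits 2; condensed (minikube assumed on PATH):

  minikube start -p NoKubernetes-833757 --no-kubernetes --driver=kvm2 --container-runtime=crio
  minikube -p NoKubernetes-833757 status -o json   # exit status 2 while Kubernetes is stopped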

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.57s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.57s)

                                                
                                    
TestNoKubernetes/serial/Start (57.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833757 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833757 --no-kubernetes --driver=kvm2  --container-runtime=crio: (57.299318079s)
--- PASS: TestNoKubernetes/serial/Start (57.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (103.65s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.236812957 start -p stopped-upgrade-720890 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.236812957 start -p stopped-upgrade-720890 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.170841323s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.236812957 -p stopped-upgrade-720890 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.236812957 -p stopped-upgrade-720890 stop: (2.127758568s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-720890 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-720890 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.353946941s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.65s)
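The stopped-binary upgrade differs from the running upgrade only by the stop in between; the three steps from the log are:

  /tmp/minikube-v1.26.0.236812957 start -p stopped-upgrade-720890 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.26.0.236812957 -p stopped-upgrade-720890 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-720890 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio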

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-833757 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-833757 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.351091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
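The verification is a plain systemctl probe over SSH; a non-zero exit (status 3 from systemctl, surfaced as status 1 by the wrapper) is the expected outcome when no kubelet is installed. By hand, with minikube on PATH:

  minikube ssh -p NoKubernetes-833757 "sudo systemctl is-active --quiet service kubelet"
  echo $?   # non-zero here confirms the kubelet unit is not active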

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.604845458s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.50s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.6s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-833757
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-833757: (1.604627592s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (40.93s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833757 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833757 --driver=kvm2  --container-runtime=crio: (40.929888615s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.93s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-720890
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-720890: (1.043208719s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-833757 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-833757 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.584365ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestPause/serial/Start (63.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-746219 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-746219 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.720456726s)
--- PASS: TestPause/serial/Start (63.72s)

                                                
                                    
TestNetworkPlugins/group/false (3.2s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-378078 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-378078 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.027142ms)

                                                
                                                
-- stdout --
	* [false-378078] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18453
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 20:23:00.292184   52957 out.go:291] Setting OutFile to fd 1 ...
	I0319 20:23:00.292459   52957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:23:00.292470   52957 out.go:304] Setting ErrFile to fd 2...
	I0319 20:23:00.292475   52957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 20:23:00.292690   52957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18453-10028/.minikube/bin
	I0319 20:23:00.293251   52957 out.go:298] Setting JSON to false
	I0319 20:23:00.294142   52957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7478,"bootTime":1710872302,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0319 20:23:00.294205   52957 start.go:139] virtualization: kvm guest
	I0319 20:23:00.296738   52957 out.go:177] * [false-378078] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0319 20:23:00.298268   52957 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 20:23:00.299561   52957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 20:23:00.298294   52957 notify.go:220] Checking for updates...
	I0319 20:23:00.302323   52957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18453-10028/kubeconfig
	I0319 20:23:00.303796   52957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18453-10028/.minikube
	I0319 20:23:00.305252   52957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0319 20:23:00.306683   52957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 20:23:00.308560   52957 config.go:182] Loaded profile config "force-systemd-flag-910871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:23:00.308660   52957 config.go:182] Loaded profile config "kubernetes-upgrade-853797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0319 20:23:00.308742   52957 config.go:182] Loaded profile config "pause-746219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0319 20:23:00.308842   52957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 20:23:00.344521   52957 out.go:177] * Using the kvm2 driver based on user configuration
	I0319 20:23:00.345782   52957 start.go:297] selected driver: kvm2
	I0319 20:23:00.345792   52957 start.go:901] validating driver "kvm2" against <nil>
	I0319 20:23:00.345803   52957 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 20:23:00.347627   52957 out.go:177] 
	W0319 20:23:00.348977   52957 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0319 20:23:00.350567   52957 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-378078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-378078" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-378078

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378078"

                                                
                                                
----------------------- debugLogs end: false-378078 [took: 2.942378159s] --------------------------------
helpers_test.go:175: Cleaning up "false-378078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-378078
--- PASS: TestNetworkPlugins/group/false (3.20s)
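The "false" group exists only to confirm the guard rail that CRI-O refuses to run without a CNI plugin; the failing invocation, taken from the log (minikube assumed on PATH), is:

  minikube start -p false-378078 --memory=2048 --alsologtostderr --cni=false --driver=kvm2 --container-runtime=crio
  # exit status 14: X Exiting due to MK_USAGE: The "crio" container runtime requires CNI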

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (157.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-414130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
E0319 20:25:04.834773   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-414130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (2m37.419993134s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (157.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (98.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-421660 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-421660 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m38.30972333s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (98.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-414130 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [45590dea-8794-4ee4-a7bc-df1061021cde] Pending
helpers_test.go:344: "busybox" [45590dea-8794-4ee4-a7bc-df1061021cde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [45590dea-8794-4ee4-a7bc-df1061021cde] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004252558s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-414130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)
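DeployApp follows the same pattern in every StartStop group: apply the busybox manifest from testdata, wait for the pod to become Ready, then confirm exec works. By hand that is roughly the sequence below (kubectl wait is a stand-in for the test's own polling loop):

  kubectl --context no-preload-414130 create -f testdata/busybox.yaml
  kubectl --context no-preload-414130 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context no-preload-414130 exec busybox -- /bin/sh -c "ulimit -n"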

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-414130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-414130 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
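EnableAddonWhileActive turns on metrics-server with its image and registry overridden to a deliberately unreachable endpoint, then only checks that the deployment object was created; condensed (minikube assumed on PATH):

  minikube addons enable metrics-server -p no-preload-414130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context no-preload-414130 describe deploy/metrics-server -n kube-system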

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-421660 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c6a03291-9dc2-4996-b992-a06b76d63603] Pending
helpers_test.go:344: "busybox" [c6a03291-9dc2-4996-b992-a06b76d63603] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c6a03291-9dc2-4996-b992-a06b76d63603] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004698876s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-421660 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-421660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-421660 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-385240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0319 20:29:30.843548   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-385240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (57.647681434s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (698.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-414130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-414130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (11m38.543629442s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-414130 -n no-preload-414130
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (698.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ccaef1b-8644-49ea-94ce-4dbadec1a03e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ccaef1b-8644-49ea-94ce-4dbadec1a03e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004197483s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-385240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-385240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (534.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-421660 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-421660 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (8m54.446503765s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-421660 -n embed-certs-421660
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (534.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-159022 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-159022 --alsologtostderr -v=3: (1.527326564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022: exit status 7 (74.573143ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-159022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
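EnableAddonAfterStop leans on minikube's exit codes: status exits 7 for a stopped host (treated as acceptable by the test), and addons can still be enabled against the stopped profile. Condensed, with minikube assumed on PATH:

  minikube status --format={{.Host}} -p old-k8s-version-159022 -n old-k8s-version-159022   # prints Stopped, exit status 7
  minikube addons enable dashboard -p old-k8s-version-159022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4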

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (502.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-385240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0319 20:34:30.843816   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
E0319 20:35:04.834141   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 20:39:30.843686   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-385240 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (8m22.421263189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-385240 -n default-k8s-diff-port-385240
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (502.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (61.76s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-587652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-587652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (1m1.755228734s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.76s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.33s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0319 20:55:04.834453   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m25.329495094s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-587652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-587652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.300183437s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-587652 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-587652 --alsologtostderr -v=3: (10.702977444s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-587652 -n newest-cni-587652
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-587652 -n newest-cni-587652: exit status 7 (87.22308ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-587652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (41.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-587652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-587652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (40.635969445s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-587652 -n newest-cni-587652
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-378078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-76sxg" [ff5d7c6c-4437-4f19-9d33-134b194316d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-76sxg" [ff5d7c6c-4437-4f19-9d33-134b194316d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004308452s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-378078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
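The DNS, Localhost and HairPin subtests above each issue a single kubectl exec against the netcat deployment. A combined manual re-check, assuming the auto-378078 context from this run still exists, would look like:

    CTX=auto-378078
    # DNS: resolve the in-cluster service name.
    kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own port via 127.0.0.1.
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself back through the netcat service (hairpin traffic).
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"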

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-587652 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)
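VerifyKubernetesImages lists the images cached in the profile and reports any non-minikube images it finds (here the kindnet CNI image). The underlying command can be run directly:

    out/minikube-linux-amd64 -p newest-cni-587652 image list --format=json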

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (65.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.609762345s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-587652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-587652 --alsologtostderr -v=1: (1.011184713s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-587652 -n newest-cni-587652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-587652 -n newest-cni-587652: exit status 2 (301.479695ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-587652 -n newest-cni-587652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-587652 -n newest-cni-587652: exit status 2 (322.886854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-587652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-587652 -n newest-cni-587652
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-587652 -n newest-cni-587652
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.18s)
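The Pause subtest drives the sequence below; while the profile is paused, minikube status exits with code 2 and reports the API server as Paused and the kubelet as Stopped, as captured above. A by-hand version, using the same profile name:

    PROFILE=newest-cni-587652
    out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$PROFILE" -n "$PROFILE"   # exit 2 while paused (see log)
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$PROFILE" -n "$PROFILE"     # exit 2 while paused (see log)
    out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$PROFILE" -n "$PROFILE"
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$PROFILE" -n "$PROFILE"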

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (114.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m54.724067659s)
--- PASS: TestNetworkPlugins/group/calico/Start (114.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (135.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0319 20:57:33.000651   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.005923   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.016215   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.036441   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.076743   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.157312   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.317743   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:33.637917   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:34.278924   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:35.559224   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:38.119755   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
E0319 20:57:43.240457   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m15.92559785s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (135.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8dq4n" [102d6664-1bcf-45f7-88aa-07d227db8d34] Running
E0319 20:57:53.481532   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007403818s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-378078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b489l" [6d929710-aa4f-4c02-b679-0785b8b55a26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b489l" [6d929710-aa4f-4c02-b679-0785b8b55a26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005443038s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-378078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (67.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m7.684699586s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hrpfl" [4bb20dff-00f8-4f0c-be65-9687ebac46d5] Running
E0319 20:58:49.628862   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:49.634173   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:49.644422   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:49.664669   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:49.704958   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:49.785263   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:49.945690   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:50.266273   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:50.907030   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00581585s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-378078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dx8sv" [9b78818c-a16f-4fda-898e-0b882c1b6764] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0319 20:58:52.188221   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:54.749098   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:58:54.922695   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/no-preload-414130/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dx8sv" [9b78818c-a16f-4fda-898e-0b882c1b6764] Running
E0319 20:58:59.870227   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006076605s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-378078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-378078 "pgrep -a kubelet"
E0319 20:59:10.110641   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dtsgw" [7a55f63d-c229-41b6-812c-29b0571d4401] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dtsgw" [7a55f63d-c229-41b6-812c-29b0571d4401] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005699974s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (84.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.221076737s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-378078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (123.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0319 20:59:30.591047   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
E0319 20:59:30.844318   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/functional-481771/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-378078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m3.70140233s)
--- PASS: TestNetworkPlugins/group/bridge/Start (123.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-378078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bn4mz" [fa26c3a0-8dd8-4f28-b500-3cad2d6e5bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bn4mz" [fa26c3a0-8dd8-4f28-b500-3cad2d6e5bfc] Running
E0319 20:59:47.885371   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004799645s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (26.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-378078 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-378078 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.179225233s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-378078 exec deployment/netcat -- nslookup kubernetes.default
E0319 21:00:04.834703   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/addons-630101/client.crt: no such file or directory
E0319 21:00:11.551669   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-378078 exec deployment/netcat -- nslookup kubernetes.default: (10.166990973s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.62s)
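The first nslookup against the netcat deployment timed out ("no servers could be reached") and the retry roughly ten seconds later succeeded, which is why this subtest took 26s. When re-checking by hand it can help to poll a few times; the loop below is illustrative scaffolding around the exact command the test runs:

    for attempt in 1 2 3; do
        kubectl --context enable-default-cni-378078 exec deployment/netcat -- nslookup kubernetes.default && break
        sleep 10
    done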

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8fqbl" [794c67db-2cbf-4447-ad7d-99b533be7f5a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005541884s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
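The ControllerPod subtest waits for a Running pod matching app=flannel in the kube-flannel namespace. The suite uses its own polling helper for this; a rough manual equivalent with kubectl wait (an approximation, not what the test itself runs) would be:

    kubectl --context flannel-378078 -n kube-flannel \
        wait --for=condition=Ready pod -l app=flannel --timeout=600s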

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-378078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9l94s" [c2ed48a2-35dc-4cca-bad3-b8b9d6dd5298] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9l94s" [c2ed48a2-35dc-4cca-bad3-b8b9d6dd5298] Running
E0319 21:00:59.725011   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004390141s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-378078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-378078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-378078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-crr4f" [3bf8221c-6e85-4de3-95c1-2cdbf6b02dfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0319 21:01:29.642391   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/auto-378078/client.crt: no such file or directory
E0319 21:01:33.472361   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/old-k8s-version-159022/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-crr4f" [3bf8221c-6e85-4de3-95c1-2cdbf6b02dfb] Running
E0319 21:01:34.763050   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/auto-378078/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003226296s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-378078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0319 21:01:40.685401   17301 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18453-10028/.minikube/profiles/default-k8s-diff-port-385240/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-378078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (39/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.30.0-beta.0/binaries 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
181 TestImageBuild 0
208 TestKicCustomNetwork 0
209 TestKicExistingNetwork 0
210 TestKicCustomSubnet 0
211 TestKicStaticIP 0
243 TestChangeNoneUser 0
246 TestScheduledStopWindows 0
248 TestSkaffold 0
250 TestInsufficientStorage 0
254 TestMissingContainerUpgrade 0
275 TestStartStop/group/disable-driver-mounts 0.14
279 TestNetworkPlugins/group/kubenet 3.1
287 TestNetworkPlugins/group/cilium 3.41
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
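All eight TunnelCmd subtests above skip at functional_test_tunnel_test.go:90 for the same reason: running 'route' on this runner would require a password. A hedged sketch of how such a probe could look, using sudo's non-interactive -n flag, which fails instead of prompting; the function name is hypothetical, not the repository's helper.

package sketch

import (
	"os/exec"
	"testing"
)

// requirePasswordlessRoute skips the calling test when `sudo route` would
// prompt for a password: `sudo -n` runs non-interactively and returns an
// error instead of asking, which is what the tunnel tests need to detect
// before attempting to modify routes.
func requirePasswordlessRoute(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}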

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-502023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-502023
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
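This entry also shows the skip-then-cleanup pattern: the test only runs on the virtualbox driver, but the throwaway profile is still deleted afterwards. A hypothetical sketch of that shape follows; the names are illustrative and not the actual start_stop_delete_test.go code.

package sketch

import (
	"os/exec"
	"testing"
)

// skipUnlessVirtualBox skips on any non-virtualbox driver; the deferred
// profile delete still runs after t.Skipf, because Skipf exits the test via
// runtime.Goexit rather than returning, so deferred calls are honored.
func skipUnlessVirtualBox(t *testing.T, driver, profile string) {
	t.Helper()
	defer exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).Run()
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}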

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-378078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-378078" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-378078

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378078"

                                                
                                                
----------------------- debugLogs end: kubenet-378078 [took: 2.963164735s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-378078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-378078
--- SKIP: TestNetworkPlugins/group/kubenet (3.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-378078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-378078" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-378078

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-378078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378078"

                                                
                                                
----------------------- debugLogs end: cilium-378078 [took: 3.262960364s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-378078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-378078
--- SKIP: TestNetworkPlugins/group/cilium (3.41s)

                                                
                                    